<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd"><identifier identifierType="DOI">10.34934/DVN/1NHGXH</identifier><creators><creator><creatorName nameType="Personal">Decuypere, Anouk</creatorName><givenName>Anouk</givenName><familyName>Decuypere</familyName><nameIdentifier schemeURI="https://orcid.org/" nameIdentifierScheme="ORCID">0000-0003-1683-6665</nameIdentifier><affiliation>Universiteit Antwerpen</affiliation></creator><creator><creatorName nameType="Personal">Van de Vijver, Anne</creatorName><givenName>Anne</givenName><familyName>Van de Vijver</familyName><nameIdentifier schemeURI="https://orcid.org/" nameIdentifierScheme="ORCID">0000-0002-8650-1281</nameIdentifier><affiliation>Universiteit Antwerpen</affiliation></creator></creators><titles><title>Fairness perceptions of AI use by tax administration</title><title titleType="Subtitle">Students, professionals and a representative Flemish sample</title></titles><publisher>Social Sciences and Digital Humanities Archive – SODHA</publisher><publicationYear>2024</publicationYear><subjects><subject>Social Sciences</subject><subject schemeURI="https://vocabularies.cessda.eu/vocabulary/TopicClassification" subjectScheme="CESSDA Topic Classification">Fairness</subject></subjects><contributors><contributor contributorType="ContactPerson"><contributorName nameType="Personal">Decuypere, Anouk</contributorName><givenName>Anouk</givenName><familyName>Decuypere</familyName><affiliation>Universiteit Antwerpen</affiliation></contributor></contributors><dates><date dateType="Submitted">2024-03-05</date><date dateType="Updated">2024-03-28</date></dates><resourceType resourceTypeGeneral="Dataset">Survey 
data</resourceType><sizes><size>69827</size><size>71300</size><size>926438</size><size>966134</size></sizes><formats><format>text/csv</format><format>text/tab-separated-values</format><format>text/csv</format><format>text/tab-separated-values</format></formats><version>1.1</version><rightsList><rights rightsURI="info:eu-repo/semantics/restrictedAccess"/></rightsList><descriptions><description descriptionType="Abstract">We tested whether the proportion of AI relative to human auditors in fraud selection affects perceived fairness, and whether transparency (explanations) has an impact. A higher proportion of AI was perceived as more procedurally fair, mostly through bias suppression and consistency, and attitude toward AI and trust in the tax administration explained most of the variance. Transparency (explanations) had no impact. We also found two small negative interaction effects between trust and procedural fairness: with high trust in the tax administration, fairness increased less as the proportion of AI increased; conversely, with low trust, fairness increased more. 

Dataset 1 was used for the pilot study (with students and professionals).
Dataset 2 contains a representative sample of the Flemish population.</description></descriptions><geoLocations/><fundingReferences><fundingReference><funderName>Belgian National Bank</funderName><awardNumber>/</awardNumber></fundingReference><fundingReference><funderName>Research Foundation Flanders</funderName><awardNumber>G043723N</awardNumber></fundingReference></fundingReferences></resource>