2020 (11) TMI 956 - AAAR - GST

Classification of services - Online Information and Database Retrieval Services (OIDAR) - Type-3 test administrative solution offered by the Respondent Company to its clients in India - minimum human intervention - levy of integrated tax on the supply of the said services to non-taxable online recipients in India - lower Authority had held that the Type-3 test does not qualify for classification as OIDAR service - HELD THAT:- There is no dispute on the fact that there is an element of human intervention involved in the process of scoring the essay responses in the Type-3 test. What needs to be decided is whether the extent of human intervention is minimum or not. Since there are no guidelines in Indian law regarding the concept of 'minimum human intervention' in electronically provided services, we refer to the European Commission VAT Committee Working Paper No. 896, wherein the notion of 'minimal human intervention' was discussed in the context of determining whether or not a service can be said to fall within the definition of electronically supplied services. The European VAT Committee had agreed that, for the assessment of the notion of 'minimal human intervention', it is the involvement on the side of the supplier which is relevant and not that on the side of the customer.

We have already detailed the entire process involved in conducting the Type-3 test, and it is seen that scoring by a human scorer is just one of the processes involved in a computer-based test. One of the major benefits of a computer-based test is the facility of obtaining immediate grading. While grading of multiple-choice questions is done instantaneously using an algorithm, grading of essays involves the use of AES (Automated Essay Scoring), a specialized computer program that assigns grades to essays. The Respondent has an entity in the United States which has developed an AES for reliable scoring of essay responses in a computer-based test. How does one know that the automatic scoring system works well enough to give scores consistent with consensus scores from human scorers? Any method of assessment must be judged on validity, fairness and reliability. An AES would be considered valid if it measures the trait that it purports to measure, and it would be considered reliable if its outcome is repeatable. Before computers entered the picture, essays were typically given scores by two trained human raters. If the scores differed by more than one point, a more experienced third rater would settle the disagreement. In this system, reliability was measured by the degree of agreement among the human raters. The same principle applies to measuring a computer program's performance in scoring essays.

The focus here is on a computer-based test where the intent is also to assess the performance of the candidate using an automated system. The reliability of the AES is validated by its near agreement with the score given by the human scorer. For this reason, we hold that the involvement of the human element in the assessment of essay responses is well within the realm of 'minimum human intervention'. Further, even from the perspective of the candidate, the human involvement is minimal in the entire process of the Type-3 computer-based test, starting from the manner of registering for the test, the actual test process and the outcome of the test, as all stages are automated. The Respondent accepts the electronic request for a rescore of the essay and returns the result to the candidate electronically.
The candidate, who is the service receiver, has received a fully digitally provided service. When the Type-3 computer-based test is viewed as a whole, the scoring done by the human scorer is to be regarded as being within the realm of minimum human intervention. As such, the ingredient of minimum human intervention required to classify the service as OIDAR is also satisfied. The decision of the lower Authority that the Type-3 test is not an OIDAR service cannot be accepted - the service provided for the Type-3 test is classifiable as an OIDAR service.
Issues Involved:
1. Classification of Type 2 and Type 3 tests as 'Online Information and Database Retrieval Services' (OIDAR).
2. Liability to pay integrated tax on Type 2 and Type 3 tests if not classified as OIDAR services.

Detailed Analysis:

1. Classification of Type 2 and Type 3 Tests as OIDAR Services:

Type 2 Test: The Authority for Advance Ruling (AAR) concluded that Type 2 tests, which involve candidates going to a test center, being verified by an administrator, and monitored by an invigilator, qualify as OIDAR services. This is because the scoring is entirely automated, and the results are provided electronically without human intervention.

Type 3 Test: The AAR held that Type 3 tests do not qualify as OIDAR services. These tests involve both multiple-choice questions (MCQs) and essay-based questions. While the MCQs are scored automatically, the essay-based questions are evaluated by both an automated system and human evaluators. The AAR determined that the human intervention in scoring the essay-based questions was more than minimal, thus disqualifying Type 3 tests from being classified as OIDAR services.

2. Liability to Pay Integrated Tax on Type 2 and Type 3 Tests:

For Type 2 and Type 3 tests, the respondent company argued that the place of supply would be India, making it an import of services. As per Notification No. 10/2017-Integrated Tax (Rate), services supplied by a person in a non-taxable territory to a person other than a non-taxable online recipient are taxable under the reverse charge mechanism. However, Notification No. 9/2017-Integrated Tax (Rate) exempts services (other than OIDAR) supplied to individuals for non-commercial purposes. Thus, the respondent contended that Type 2 and Type 3 tests would either be exempt or taxable under the reverse charge mechanism.

Appellant's Arguments: The Department contested the AAR's ruling on Type 3 tests, arguing that the human intervention in evaluating essay-based questions is minimal and primarily for quality assurance. They cited the European Commission VAT Committee guidelines, distinguishing between passive and active human intervention, and contended that the human intervention in Type 3 tests is passive and minimal. The Department also referenced the Board's Circular of 2016 and the European Commission's opinion on online education services to support their argument that Type 3 tests should be classified as OIDAR services.

Respondent's Arguments: The respondent maintained that the human evaluation in Type 3 tests is substantial and critical to the final score, making the human intervention more than minimal. They argued that the human scorer's role is not merely quality checking but is integral to the scoring process, thus disqualifying Type 3 tests from being classified as OIDAR services.

Discussion and Findings: The Appellate Authority examined the nature of Type 3 tests, noting that the process involves both automated and human scoring of essay-based questions. They referred to the European Commission VAT Committee guidelines, which emphasize the supplier's involvement in determining minimal human intervention. The Authority concluded that the human intervention in Type 3 tests is within the realm of minimal human intervention, as the human scorer's role is primarily to ensure the reliability of the automated scoring system. The entire process, from registration to result delivery, is automated, and the human intervention is limited to validating the automated scores.
Conclusion: The Appellate Authority allowed the Department's appeal, setting aside the AAR's ruling regarding Type 3 tests. They held that Type 3 tests qualify as OIDAR services, fulfilling all four essential ingredients: delivery over the internet, automated supply, minimal human intervention, and reliance on information technology. The appeal was disposed of on these terms.