Tag: Data

  • Institutions may be holding themselves back by not sharing enough data


    Wonkhe readers need little persuasion that information flows are vital to the higher education sector. But without properly considering those flows and how to minimise the risk of something going wrong, institutions can find themselves at risk of substantial fines, claims and reputational damage. These risks need organisational focus from the top down as well as regular review.

    Information flows in higher education occur not only in teaching and research but in every other area of activity such as accommodation arrangements, student support, alumni relations, fundraising, staff and student complaints and disciplinary matters. Sometimes these flows are within organisations, sometimes they involve sharing data externally.

    Universities hold both highly sensitive research information and personal data. Examples of the latter include information about individuals’ physical and mental health, family circumstances, care background, religion, financial information and a huge range of other personal information.

    The public narrative on risks around data tends to focus on examples of inadvertently sharing protected information – such as the recent case of the Information Commissioner’s decision to fine the Police Service of Northern Ireland £750,000 in relation to the inadvertent disclosure of personal information about over 9,000 officers and staff in response to a freedom of information request. The same breach has also resulted in individuals bringing legal claims against the PSNI, with media reports suggesting a potential bill of up to £240m.

    There is also the issue of higher education institutions being a target for cyber attack by criminal and state actors. Loss of data through such attacks again has the potential to result in fines and other regulatory action as well as claims by those affected.

    Oversharing and undersharing

    But inadvertent sharing of information and cyberattacks are not the only areas of risk. In some circumstances a failure to ensure that information is properly collected and shared lawfully may also be a risk. And ensuring effective and appropriate flows of information to the governing body is key to it being able to fulfil its oversight function.

    One aspect of the tragic circumstances mentioned in the High Court appeal ruling in the case concerning Natasha Abrahart is the finding that there had been a failure to pass on information about a suicide attempt to key members of staff, which might have enabled action to be taken to remove pressure on Natasha.

    Another area of focus concerns the sharing of information related to complaints of sexual harassment and misconduct and subsequent investigations. OfS Condition E6 and its accompanying guidance, which come fully into effect on 1 August 2025, include measures on matters such as reporting potential complaints and the sensitive handling and fair use of information. The condition and guidance require the provider to set out comprehensively, and in an easy-to-understand manner, how it ensures that those “directly affected” by decisions are directly informed about those decisions and the reasons for them.

    There are also potential information flows concerning measures intended to protect students from any actual or potential abuse of power or conflict of interest in respect of what the condition refers to as “intimate personal relationships” between “relevant staff members” and students.

    All of these data flows are highly sensitive and institutions will need to ensure that appropriate thought is given to policies, procedures and systems security as well as identifying the legal basis for collecting, holding and sharing information, taking appropriate account of individual rights.

    A blanket approach will not serve

    Whilst there are some important broad principles in data protection law that should be applied when determining the legal basis for processing personal data, in sensitive cases like allegations of sexual harassment the question of exactly what information can be shared with another person involved in the process often needs to be considered against the particular circumstances.

    Broadly speaking, in most cases where sexual harassment or mental health support is concerned, the legislation will require at minimum both a lawful basis and a condition for processing “special category” data and/or data relating to potential allegations of a criminal act. Criminal offences and allegations data and special category data (which includes data relating to an individual’s health, sex life and sexual orientation) are subject to heightened controls under the legislation.

    Without getting into the fine detail it can often be necessary to consider individuals’ rights and interests in light of the specific circumstances. This is brought into sharp focus when considering matters such as:

    • Sharing information with an emergency contact in scenarios that might fall short of a clear “life or death” situation.
    • Considering what information to provide to a student who has made a complaint about sexual harassment by another student or staff member in relation to the outcome of their complaint and of any sanction imposed.

    It’s also important not to forget other legal frameworks that may be relevant to data flows. This includes express or implied duties of confidentiality that can arise where sensitive information is concerned. Careful thought needs to be given to making clear in relevant policies and documents when it is envisaged that information might need to be shared – and to ensuring that the law permits such sharing.

    A range of other legal frameworks can also be relevant, such as consumer law, equality law and freedom of information obligations. And of course, aside from the legal issues, there will be potential reputational and institutional risks if something does go wrong. It’s important that senior management and governing bodies have sufficient oversight and involvement to encourage a culture of organisational awareness and compliance across the range of information governance issues that can arise.

    Managing the flow of information

    Institutions ought to have processes to keep their data governance under review, including measures that map out the flows and uses of data in accordance with relevant legal frameworks. The responsibility for oversight of data governance lies not only with any Data Protection Officer, but also with senior management and governors who can play a key part in ensuring a good data governance culture within institutions.

    Compliance mechanisms also need regular review and refresh, including matters such as how privacy information is provided to individuals in a clear and timely way. Data governance needs to be embedded throughout the lifecycle of each item of data. And where new activities, policies or technologies are being considered, data governance needs to be a central part of project plans at the earliest stages, to ensure that appropriate due diligence is carried out and that other compliance requirements – such as data processing agreements or data protection impact assessments – are in place.

    Effective management of the flow ensures that the right data gets in front of the right people, at the right time – and means everyone can be confident the right balance has been struck between maintaining privacy and sharing vital information.

    This article is published in association with Mills & Reeve.

    Source link

  • Data futures, reviewed | Wonkhe


    As a sector, we should really have a handle on how many students we have and what they are like.

    Data Futures – the multi-year programme that was designed to modernise the collection of student data – has become, among higher education data professionals, a byword for delays, stress, and mixed messages.

    It was designed to deliver in-year data (so 2024-25 data arriving within the 2024-25 academic year) three times a year, drive efficiency in data collection (by allowing for process streamlining and automation), and remove “data duplication” (becoming a single collection that could be used for multiple purposes by statutory customers and others). To date it has achieved none of these benefits, and has instead (for 2022-23 data) driven one of the sector’s most fundamental pieces of data infrastructure into such chaos that all forward uses of data require heavy caveats.

    The problem with the future

    In short – after seven years of work (at the point the review was first mooted), and substantial investment, we are left with more problems than we started with. Most commentary has focused on four key difficulties:

    • The development of the data collection platform, starting with Civica in 2016 and later taken over by Jisc, has been fraught with difficulties, frequently delayed, and subject to numerous changes in scope
    • The documentation and user experience of the data collection platform have been lacking. Rapid changes have not been matched by updates for those who use the platform within providers, or for those who support those providers (the HESA Liaison team). The error handling and automated quality rules have caused particular issues – indeed the current iteration of the platform still struggles with fields that require responses involving decimal fractions.
    • The behaviour of some statutory customers – frequently modifying requirements, changing deadlines, and putting unhelpful regulatory pressure on providers – has not helped matters.
    • The preparedness of the sector has been inconsistent between providers and between software vendors. This level of preparedness has not been fully understood – in part because of a nervousness among providers around regulatory consequences for late submissions.

    These four interlinked strands have been exacerbated by an underlying fifth issue:

    • The quality of programme management, programme delivery, and programme documentation has not been of the standard required for a major infrastructure project. Part of this has been due to problems in staffing and in programme governance – but there are also reasonable questions to be asked about the underlying programme management process.

    Decisions to be made

    An independent review was originally announced in November 2023, overlapping a parallel internal Jisc investigation. The results we have may not be timely – the review didn’t even appear to start until early 2024 – but even the final report merely represents a starting point for some of the fundamental discussions that need to happen about sector data.

    I say a “starting point” because many of the issues raised by the review concern decisions about the projected benefits of doing data futures. As none of the original benefits of the programme have been realised in any meaningful way, the future of the programme (if it has one) needs to be focused on what people actually want to see happen.

    The headline is in-year data collection. To the external observer, it is embarrassing that other parts of the education sector can return data on a near real-time basis while higher education cannot – universities update the records they hold on students regularly, so it should not be impossible to update external data too. It should not come as a surprise, then, that the review recommends:

    As a priority, following completion of the 2023-24 data collection, the Statutory Customers (with the help of Jisc) should revisit the initial statement of benefits… in order to ascertain whether a move to in-year data collection is a critical dependent in order to deliver on the benefits of the data futures programme.

    This isn’t just an opportunity for regulators to consider their shopping list – a decision to continue needs to be swiftly followed by a cost-benefit analysis, reassessing the value of in-year collection and determining whether, or when, to pursue it. And the decision has been made: there will, one day, be in-year student data. In a joint statement the four statutory customers said:

    After careful consideration, we intend to take forward the collection of in-year student data

    highlighting the need for data to contribute to “robust and timely regulation”, and reminding institutions that they will need “adequate systems in place to record and submit student data on time”.

    The bit that interests me here is the implications for programme management.

    Managing successful programmes

    If you look at the government’s recent record in delivering large and complex programmes you may be surprised to learn of the existence of a Government Functional Standard covering portfolio, programme, and project management. What’s a programme? Well:

    A programme is a unique, temporary, flexible organisation created to co-ordinate, direct and oversee the implementation of a set of projects and other related work components to deliver outcomes and benefits related to a set of strategic objectives

    Language like this, and the concepts underpinning it, come from what remains the gold-standard programme management methodology, Managing Successful Programmes (MSP). If you are more familiar with the world of project management (project: “a unique temporary management environment, undertaken in stages, created for the purpose of delivering one or more business products or outcomes”) it bears a familial resemblance to PRINCE2.

    If you do manage projects for a living, you might be wondering where I have been for the last decade or so. The cool kids these days are into a suite of methodologies that come under the general description of “agile” – PRINCE2 is now seen primarily as a cautionary tale: a “waterfall” (top-down, documentation-centred, deadline-focused) management practice rather than an “iterative” (emergent, development-centred, short-term) one.

    Each approach has strengths and weaknesses. Waterfall methods are great if you want to develop something that meets a clearly defined need against clear milestones and a well understood specification. Agile methods are a nice way to avoid writing reports and updating documentation.

    Data futures as a case study

    In the real world, the distinction is less clear cut. Most large programmes in the public sector use elements of waterfall methods (regular project reports, milestones, risk and benefits management, senior responsible owners, formal governance) as a scaffold in which sit agile elements at a more junior level (short development cycle, regular “releases” of “product” prioritised above documentation). While this can be done well it is very easy for the two ideologically separate approaches to drift apart – and it doesn’t take much to read this into what the independent review of data futures reveals.

    Recommendation B1 calls, essentially, for clarity:

    • Clarity of roles and responsibilities
    • Clarity of purpose for the programme
    • Clarity on the timetable, and on how and when the scope of the programme can be changed

    This is amplified by recommendation C1, which looks for specific clarifications around “benefits realisation” – which itself underpins the central recommendation relating to in-year data.

    In classic programme management (like MSP) the business case will include a map of programme benefits: that is, all of the good things that will come about as a result of the hard work of the programme. Like the business case’s risk register (a list of all the bad things that might happen, and what can be done if they do) it is supposed to be regularly updated and signed off by the Programme Board – which is made up of the most senior staff responsible for the work of the programme (the Senior Responsible Owners, in the lingo).

    The statement of benefits languished for some time without a full update (there was an incomplete attempt in February 2023, and a promise to make another one after the completed 2022-23 collection – we are not told whether that second update happened). In proper, grown-up programme management this is supposed to be done in a systematic way: at every programme board meeting you review the benefits and the risk register. It’s dull (most of the time!) but it is important. The board needs an eye on whether the programme still offers value overall (based on an analysis of projected benefits). And if the scope needed to change, the board would have final say on that.

    The issue with Data Futures was a lack of clarity over whether this level of governance actually had the power to do these things, and – if not – who was actually doing them. The Office for Students latterly put together quite a complex and unwieldy governance structure, with a quarterly review group having oversight of the main programme board. This QRG was made up of very senior staff at the statutory customers (OfS, HEFCW, SFC, DoE(NI)), Jisc, and HESA (plus one Margaret Monckton – now chair of this independent review! – as an external voice).

    The QRG oversaw the work of the programme board – meaning that decisions made by the senior staff nominally responsible for the direction of the programme were often second-guessed by their direct line managers. The programme board was supposed to have its own assurance function and an independent observer – it had neither (despite the budget being there for it).

    Stop and go

    Another role of the board is to make what are more generally called “stop-go” decisions, here described as “approval to proceed”. This is an important way of making sure the programme is still on track – you’d set (in advance) the criteria that needed to be fulfilled in terms of delivery (was the platform ready, had the testing been done) before you moved on to the next work package. Below this, incremental approvals are made by line managers or senior staff as required, but reported upwards to the board.

    What seems to have happened a lot in the Data Futures programme is the granting of what are called conditional approvals – where some of these conditions were waived based on assurances that the remaining required work would be completed. This is fine as far as it goes (not everything lines up all the time) but as the report notes:

    While the conditions of the approvals were tracked in subsequent increment approval documents, they were not given a deadline, assignee or accountable owner for the conditions. Furthermore, there were cases where conditions were not met by the time of the subsequent approval

    Why would you do that? Well, you’d be tempted if you had another board above you – comprising very senior staff and key statutory customers – concerned about the very public problems with Data Futures and looking for progress. The Quarterly Review Group (QRG), as it turned out, only ended up making five decisions (and in three of these cases it just punted the issue back down to the programme board – the other two, for completists, were to delay plans for in-year collection).

    What it was meant to be doing was “providing assurance on progress”, “acting as an escalation point” and “approving external assurance activities”. As we’ve already seen, it didn’t really bother with external assurance. And on the other points the review is damning:

    From the minutes provided, the extent to which the members of the QRG actively challenged the programme’s progress and performance in the forum appears to be limited. There was not a clear delegation of responsibilities between the QRG, Programme Board and other stakeholders. In practice, there was a lack of clarity also on the role of the Data Futures governance structure and the role of the Statutory Customers separately to the Data Futures governance structure; some decisions around the data specification were taken outside of the governance structure.

    Little wonder that the section concludes:

    Overall, the Programme Board and QRG were unable to gain an independent, unbiased view on the progress and success of the project. If independent project assurance had been in place throughout the Data Futures project, this would have supported members of the Programme Board in oversight of progress and issues may have been raised and resolved sooner

    Resourcing issues

    Jisc, as developer, took on responsibility for technical delivery in late 2019. Incredibly, Jisc was not provided with funding to do this work until March 2020.

    As luck would have it, March 2020 saw the onset of a series of lockdowns and a huge upswing in demand for the kind of technical and data skills needed to deliver a programme like Data Futures. Jisc struggled to fill key posts, most notably running for a substantial period of time without a testing lead in post.

    If you think back to the 2022-23 collection, the accepted explanation around the sector for what – at heart – had gone wrong was a failure to test “edge cases”. Students, it turns out, are complex and unpredictable things – with combinations of characteristics and registrations that you might not expect to find. A properly managed programme of testing would have focused on these edge cases – and there would have been fewer issues when the collection went live.

    Under-resourcing and understaffing are problems in their own right, but here they were exacerbated by rapidly changing data model requirements, largely coming from statutory customers.

    To quote the detail from the report:

    The expected model for data collection under the Data Futures Programme has changed repeatedly and extensively, with ongoing changes over several years on the detail of the data model as well as the nature of collection and the planned number of in-year collections. Prior to 2020, these changes were driven by challenges with the initial implementation. The initial data model developed was changed substantially due to technical challenges after a number of institutions had expended significant time and resource working to develop and implement it. Since 2020, these changes were made to reflect evolving requirements of the return from Statutory Customers, ongoing enhancements to the data model and data specification and significantly, the ongoing development of quality rules and necessary technical changes determined as a result of bugs identified after the return had ‘gone live’. These changes have caused substantial challenges to delivery of the Data Futures Programme – specifically reducing sector confidence and engagement as well as resulting in a compressed timeline for software development.

    Sector readiness

    It’s not enough to conjure up a new data specification and platform – it is hugely important to be sure that your key people (“operational contacts”) within the universities and colleges that would be submitting data are ready.

    At a high level, this did happen – there were numerous surveys of provider readiness, and the programme also worked with the small number of software vendors that supply student information systems to the sector. This formal programme communication came alongside the more established links between the sector and the HESA Liaison team.

    However, such was the level of mistrust between universities and the Office for Students (which could technically have found struggling providers in breach of condition of registration F4) that it is widely understood that answers to these surveys were less than honest. As the report says:

    Institutions did not feel like they could answer the surveys honestly, especially in instances where the institution was not on track to submit data in line with the reporting requirements, due to the outputs of the surveys being accessible to regulators/funders and concerns about additional regulatory burden as a result.

    The decision to scrap a planned mandatory trial of the platform, taken in March 2022 by the Quarterly Review Group, was ostensibly made to reduce burden – but, coupled with the unreliable survey responses, it meant that HESA was unable to identify cases where support was needed.

    This is precisely the kind of risk that should have been escalated to programme board level – a lack of transparency between Jisc and the board about readiness made it harder to take strategic actions on the basis of evidence about where the sector really was. And the issue continued into live collection – because Liaison were not made aware of common problems (“known issues”, in fact) the team often struggled with out-of-date documentation: meaning that providers got conflicting messages from different parts of Jisc.

    Liaison, for their part, dealt with more than 39,000 messages between October and December 2023 (the peak of issues raised during the collection process) – and even given the problems noted above they resolved 61 per cent of queries on the first try. Given the level of stress in the sector (queries came in at all hours of the day) and the longstanding and special relationship that data professionals have with HESA Liaison, you could hardly criticise that team for making the best of a near-impossible situation.

    I am glad to see that the review notes:

    The need for additional staff, late working hours, and the pressure of user acceptance testing highlights the hidden costs and stress associated with the programme, both at institutions and at Jisc. Several institutions talked about teams not being able to take holidays over the summer period due to the volume of work to be delivered. Many of the institutions we spoke to indicated that members of their team had chosen to move into other roles at the institution, leave the sector altogether, experienced long term sickness absence or retired early as a result of their experiences, and whilst difficult to quantify, this will have a long-term impact on the sector’s capabilities in this complex and fairly niche area.

    Anyone who was even tangentially involved in the 2022-23 collection, or attended the “Data Futures Redux” session at the Festival of Higher Education last year, will find those words familiar.

    Moving forward

    The decision on in-year data has been made – it will not happen before the 2026-27 academic year, but it will happen. Programme delivery and governance will need to improve, and there are numerous detailed recommendations to that end: we should expect more detail and a timeline to follow.

    It does look as though there will be more changes to the data model to come – though the recommendation is that this should be frozen 18 months before the start of data collection, which by my reckoning would mean a confirmed data model printed out and on the walls of SROC members in the spring of 2026. A subset of institutions would make an early in-year submission, which may not be published in order to “allow for lower than ideal data quality”.

    On arrangements for the 2024-25 and 2025-26 collections there are no firm recommendations – it is hoped that data model changes will be minimal and that the time will be used to ensure that the sector and Jisc are genuinely ready for the advent of the data future.

    Source link

  • Comparative Data on Race & Ethnicity in Education Abroad by Percentage of Students [2025]


    References

     

    American Association of Community Colleges. (2024). AACC Fast Facts 2024. https://www.aacc.nche.edu/researchtrends/fast-facts/

    Fund for Education Abroad (FEA). (2024, December). Comparative Data on Race & Ethnicity of FEA Awards 2022-2023 by Percentage of Students. Data obtained from Joelle Leinbach, Program Manager at the Fund for Education Abroad. https://fundforeducationabroad.org/

    Institute of International Education. (2024). Profile of U.S. Study Abroad Students, 2024 Open Doors U.S. Student Data. https://opendoorsdata.org/data/us-study-abroad/student-profile/

    Institute of International Education. (2024). Student Characteristics: U.S. Students Studying Abroad at Associate’s Colleges, Data from the 2024 Open Doors Report. https://opendoorsdata.org/data/us-study-abroad/community-college-student-characteristics/

    Institute of International Education. (2022, May). A Legacy of Supporting Excellence and Opportunity in Study Abroad: 20-Year Impact Study, Comprehensive Report. Benjamin A. Gilman International Scholarship. https://www.gilmanscholarship.org/program/program-statistics/

    United States Census Bureau. (2020). DP1 | Profile of General Population and Housing Characteristics, 2020: DEC Demographic Profile. https://data.census.gov/table?g=010XX00US&d=DEC+Demographic+Profile

    U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. (2023, August). Characteristics of Postsecondary Students. https://nces.ed.gov/programs/coe/indicator/csb/postsecondarystudents

    Bibliography of Literature, Presentations & Curriculum Integration Projects Incorporating the Comparative Data Table on Race & Ethnicity in Education Abroad

    Comp, D. & Bakkum, N. (2025, January). Study Away/Abroad for All Students! – Who Studies Away/Abroad at Columbia College? Invited presentation for faculty at the Winter 2025 Faculty and Staff Development Days at Columbia College Chicago.

    Lorge, K. & Comp, D. (2024, April). A Case for Simple and Comparable Data to Assess Race and Ethnicity in Education Abroad. The Global Impact Exchange: Publication of Diversity Abroad. Spring 2024. https://www.diversityabroad.org/GlobalImpactExchange 

    Comp, D. (2019). Effective Utilization of Data for Strategic Planning and Reporting with Case Study: My Failed Advocacy Strategy. In A.C. Ogden, L.M. Alexander, & E. Mackintosh (Eds.), Education Abroad Operational Management: Strategies, Opportunities, and Innovations, A Report on ISA ThinkDen, 72-75. Austin, TX: International Studies Abroad. https://educationaltravel.worldstrides.com/rs/313-GJL-850/images/ISA%20ThinkDen%20Report%202018.pdf

    Comp, D. (2018, July). Effective Utilization of Data for Strategic Planning and Reporting in Education Abroad. Invited presentation at the ISA ThinkDen at the 2018 ThinkDen meeting, Boulder CO.

    Comp, D. (2010). Comparative Data on Race and Ethnicity in Education Abroad. In Diversity in International Education Hands-On Workshop: Summary Report and Data from the Workshop held on September 21, 2010, National Press Club, Washington, D.C. (pp. 19-21). American Institute For Foreign Study. https://www.aifsabroad.com/publications/

    Stallman, E., Woodruff, G., Kasravi, J., & Comp, D. (2010, March). The Diversification of the Student Profile. In W.W. Hoffa & S. DePaul (Eds.). A History of US Study Abroad: 1965 to Present, 115-160. Carlisle, PA: The Forum on Education Abroad/Frontiers: The Interdisciplinary Journal of Study Abroad.

    Comp, D., & Woodruff, G.A. (2008, May). Data and Research on U.S. Multicultural Students in Study Abroad. Co-Chair and presentation at the 2008 NAFSA Annual Conference, Washington, D.C.

    Comp, D.  (2008, Spring). U.S. Heritage-Seeking Students Discover Minority Communities in Western Europe.  Journal of Studies in International Education, 12 (1), 29-37.

    Comp, D.  (2007). Tool for Institutions & Organizations to Assess Diversity of Participants in Education Abroad. Used by the University of Minnesota Curriculum Integration Project.

    Comp, D. (2006). Underrepresentation in Education Abroad – Comparative Data on Race and Ethnicity. Hosted on the NAFSA: Association of International Educators, “Year of Study Abroad” website.

    Comp, D. (2005, November). NAFSA: Association of International Educators Subcommittee on Underrepresentation in Education Abroad Newsletter, 1 (2), 6.

    Past IHEC Blog posts about the Comparative Data Table on Race & Ethnicity in Education Abroad

    Tool for Institutions & Organizations to Assess Diversity of Participants in Education Abroad [February 15, 2011]

    How Do We Diversify The U.S. Study Abroad Student Population? [September 21, 2010]

    How do we Diversify the U.S. Study Abroad Student Profile? [December 8, 2009]

    Source link

  • Crafting technology-driven IEPs


    Key points:

    Individualized Education Plans (IEPs) have been the foundation of special education for decades, and the process by which these documents are written has evolved over the years.

    As technology has evolved, so has document writing. Before programs existed to streamline the process, creating an IEP was a daunting task of paper and pencil. Not only has the writing process changed, but IEPs themselves are becoming technology-driven.

    Enhancing IEP goal progress with data-driven insights using technology: A variety of learning platforms can monitor a student’s performance in real time, tailoring instruction to individual needs and identifying areas for improvement. Data from these programs can be used to create students’ annual IEP goals. This study mentions that the ReadWorks program, used for progress monitoring of IEP goals, has 1.2 million teachers and 17 million students using its resources, which provide content, curricular support, and digital tools. ReadWorks provides all of its resources free of charge, with both printed and digital versions of the material available to teachers and students (Education Technology Nonprofit, 2021).

    Student engagement and involvement with technology-driven IEPs: Technology-driven IEPs can also empower students to take an active role in their education plan. According to this study, research shows that special education students benefit from educational technology, especially in concept teaching and in practice-feedback instructional activities (Carter & Center, 2005; Hall, Hughes & Filbert, 2000; Hasselbring & Glaser, 2000). It is vital for students to take ownership of their learning, and when students on an IEP reach a certain age, it is important for them to take the lead in their plan. Digital tools used for technology-driven IEPs can provide students with visual representations of their progress, such as dashboards or graphs. When students can see their progress, their engagement and motivation increase.

    Technology-driven IEPs make learning fun: This study discusses technology-enhanced and game-based learning for children with special needs. Gamified programs, virtual reality (VR), and augmented reality (AR) change the learning experience from traditional to transformative. Gamified programs are intended to motivate students with rewards, personalized feedback, and competition, using leaderboards and challenges to make learning feel like play. Virtual reality gives students an immersive experience that they would otherwise only encounter outside the classroom. It allows for deep engagement and experiential learning via virtual field trips and simulations, without the risks of visiting dangerous places or the costly field trip fees that not all districts or students can afford. Augmented reality allows students to visualize abstract concepts, such as anatomy or 3D shapes, in context. All of these technologies align with technology-driven IEPs by providing personalized, accessible, and measurable learning experiences that address diverse needs, adapting to a student’s individual skill level, pace, and goals in support of their IEP.

    Challenges with technology-driven IEPs: Although there are many benefits to technology-driven IEPs, it is important to address the potential challenges in order to ensure equity across school districts. Access to technology in underfunded districts can be limited without proper investment in infrastructure, devices, and network connectivity. Student privacy and data must also be properly protected: in using technologies for technology-driven IEPs, school districts must take into consideration laws such as the Family Educational Rights and Privacy Act (FERPA).

    The integration of technology into the IEP process to create technology-driven IEPs represents a shift from a traditional process to a transformative one. Technology-driven IEPs create more student-centered learning experiences by implementing digital tools, enhancing collaboration, and personalizing learning. These experiences enhance student engagement and motivation and allow students to take control of their own learning, making them leaders in their IEP process. However, as technology continues to evolve, it is important to address the equity gap that may arise in underfunded school districts.


    Source link

  • Why Data Alone Won’t Improve Retention – Faculty Focus


    Source link

  • Keep talking about data | Wonkhe


    How’s your student data this morning?

    Depending on how close you sit to your institutional student data systems, your answer may range from a bemused shrug to an anguished yelp.

    For the most part, we remain blissfully unaware of how much work it currently takes to derive useful and actionable insights from the various data traces our students leave behind them. We’ve all seen the advertisements promising seamless systems integration and a tangible improvement in the student experience, but in most cases the reality is far different.

    James Gray’s aim is to start a meaningful conversation about how we get there and what systems need to be in place to make it happen – at a sector as well as a provider level. As he says:

    There is a genuine predictive value in using data to design future solutions to engage students and drive improved outcomes. We now have the technical capability to bring content, data, and context together in a way that simply has not been possible before now.

    All well and good, but just because we have the technology doesn’t mean we have the data in the right place or the right format – the problem is, as Helen O’Sullivan has already pointed out on Wonkhe, silos.

    Think again about your student data.

    Some of it is in your student information system (assessment performance, module choices), which may or may not link to the application tracking systems that got students on to courses in the first place. You’ll also have information about how students engage with your virtual learning environment, what books they are reading in the library, how they interact with support services, whether and how often they attend in person, and their (disclosed) underlying health conditions and specific support needs.
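
    To make the silo problem concrete, here is a minimal sketch (in Python, with entirely hypothetical system extracts and column names – not any particular institution’s schema or any vendor’s product) of what “bringing the data together” means in practice: joining records from separate systems on a common student identifier. The hard part in real life is rarely the join itself, but agreeing identifiers, definitions and a lawful basis for linking the data at all.

    ```python
    # A minimal, hypothetical sketch of joining student data held in separate silos.
    import pandas as pd

    # Extracts from three separate systems - in reality these might come from the
    # student information system, the VLE, and the library platform.
    sis = pd.DataFrame({
        "student_id": ["s001", "s002", "s003"],
        "programme": ["History", "Physics", "Nursing"],
        "avg_module_mark": [62, 48, 71],
    })
    vle = pd.DataFrame({
        "student_id": ["s001", "s002", "s003"],
        "logins_last_30_days": [25, 3, 18],
    })
    library = pd.DataFrame({
        "student_id": ["s001", "s003"],
        "loans_this_term": [4, 9],
    })

    # Left-join onto the SIS record so every enrolled student appears once,
    # even where another silo holds nothing for them (a gap worth knowing about).
    combined = (
        sis.merge(vle, on="student_id", how="left")
           .merge(library, on="student_id", how="left")
    )

    # A crude engagement flag of the kind an early-alert dashboard might surface.
    combined["low_engagement"] = (
        (combined["logins_last_30_days"].fillna(0) < 5)
        & (combined["loans_this_term"].fillna(0) == 0)
    )
    print(combined)
    ```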

    The value of this stuff is clear – but without a whole-institution strategic approach to data it remains just a possibility. James notes that:

    We have learned that a focus on the student digital journey and institutional digital transformation means that we need to bring data silos together, both in terms of use and collection. There needs to be a coherent strategy to drive deployment and data use.

    But how do we get there? From what James has seen overseas, in big online US providers like Georgia Tech and Arizona State, data is managed strategically at the highest levels of university leadership. It’s perhaps a truism to suggest that if you really care about something it needs ownership at a senior level, but having that level of buy-in unlocks the resource and momentum that a big project like this needs.

    We also talked about the finer-grained aspects of implementation – James felt that the way to bring students and staff on board is to clearly demonstrate the benefits, and to listen (and respond) to concerns. The latter is essential because “you will annoy folks”.

    Is it worth this annoyance to unlock gains in productivity and effectiveness? Ideally, we’d all be focused on getting the greatest benefit from our resources – but often processes and common practices are arranged in sub-optimal ways for historical reasons, and rewiring large parts of someone’s role is a big ask. The hope is that the new way will prove simpler and less arduous, so it absolutely makes sense to focus on potential benefits and their realisation – and bringing in staff voices at the design stage can make for gains in autonomy and job satisfaction.

    The other end of the problem concerns procurement. Many providers have updated their student records systems in recent years in response to the demands of the Data Futures programme. The trend has been away from bespoke and customised solutions and towards commercial off-the-shelf (COTS) procurement: the thinking here being that updates and modifications are easier to apply consistently with a standard install.

    As James outlines, providers are looking at a “buy, build, or partner” decision – and institutions with different goals (and at different stages of data maturity) may choose different options. There is, though, enormous value in senior leaders talking across institutions about decisions such as these. “We had to go through the same process,” James outlined. “In the end we decided to focus on our existing partnership with Microsoft to build a cutting edge data warehouse, and data ingestion, hierarchy and management process leveraging Azure and MS Fabric with direct connectivity to Gen AI capabilities to support our university customers with their data, and digital transformation journey.” There is certainly both knowledge and hard-won experience out there about the different trade-offs, but what university leader wants to tell a competitor about the time they spent thousands of pounds on a platform that didn’t communicate with the rest of their data ecosystem?

    As Claire Taylor recently noted on Wonkhe, there is power in the relationships and networks among senior leaders that exist to share learning for the benefit of many. It is becoming increasingly clear that higher education is a data-intensive sector – so every provider should feel empowered to make one of the most important decisions it will make in the light of a collective understanding of the landscape.

    This article is published in association with Kortext. Join us at an upcoming Kortext LIVE event in London, Manchester and Edinburgh in January and February 2025 to find out more about Wonkhe and Kortext’s work on leading digital capability for learning, teaching and student success.

    Source link

  • Common App data shows 5% jump in first-year college applicants



     Dive Brief:

    • First-year Common Applications are up 5% year over year, with over 1.2 million prospective students submitting the forms for the 2024-25 application cycle as of Jan. 1, the company said Thursday.
    • First-year applications ticked up across both institution types and student demographics, but some groups saw accelerated growth. Common App found disproportionate increases among students believed to be from low-income households and those who identified as underrepresented minorities. 
    • Applications to public institutions grew by 11% year over year, outpacing the 3% growth seen at private colleges, Thursday’s report said. 

    Dive Insight:

    Applications from prospective first-year students have steadily increased since the 2020-21 application cycle, Common App found. 

    That’s despite the challenges that have thrown aspects of college admissions into tumult, including the botched rollout of the updated Free Application for Federal Student Aid during the 2024-25 cycle and the U.S. Supreme Court’s June 2023 ban on race-conscious admissions.

    Roughly 960,000 students used the Common App portal to submit over 4.8 million applications during the 2020-21 cycle. In the 2024-25 cycle, over 1.2 million users submitted just under 6.7 million applications.

    Prospective students can continue to apply to colleges through the month and beyond. But a majority of applications for the following fall semester are traditionally submitted by the end of December. 

    The number of colleges first-year prospects applied to ticked up slightly between 2020-21 and 2024-25, but remained between five and six institutions. 

    Common App found disproportionate application growth among students from low-income households. The portal does not directly collect household income from applicants, but researchers used students who were eligible for fee waivers as a proxy. Application rates for that group increased by 10%, compared to 2% for their counterparts who weren’t eligible for the waivers.

    Moreover, applications from students in ZIP codes where median incomes fall below the national average grew 9% since the 2023-24 cycle, compared to 4% growth from those in above-median income areas, Common App found.

    The company also saw more applications from minority groups underrepresented in higher education, classified by researchers as those who identify as Black or African American, Latinx, Native American or Alaska Native, or Native Hawaiian or other Pacific Islander.

    As of Jan. 1, 367,000 underrepresented applicants used Common App to submit first-year applications. But their numbers are growing at a faster rate than their counterparts.

    Among students in underrepresented groups, first-year applications grew by 13% since last year, compared to the 2% growth for the others. 

    Latinx and Black or African American candidates drove much of that growth, showing year-over-year increases of 13% and 12%, respectively.

    However, it appears that students are reconsidering their application materials following the 2023 Supreme Court decision. In June, separate Common App research found a decrease in the number of Asian, Black, Latinx and White students referencing race or ethnicity in their college essays.

    Thursday’s report also found more first-year students including standardized test scores in their applications, up 10% since last year. The number of applicants leaving them out remained unchanged year over year.

    “This marks the first time since the 2021–22 season that the growth rate of test score reporters has surpassed that of non-reporters, narrowing the gap between the two groups,” the report said.

    That’s despite interest slowing in highly selective colleges, the type of institutions that have historically most used standardized test scores in the admissions process.

    Applications to colleges with acceptance rates below 25% grew just 2% in 2024-25, Common App found. That’s compared to the between 8% and 9% increases seen at institutions of all other selectivity levels.

    Just 5% of the colleges on Common App required test scores in the 2024-25 application cycle, a slight uptick from the 4% that did so the previous year. 

    COVID-19 pushed many institutions with test requirements to temporarily waive this mandate, and some ultimately made the change permanent.

    But others returned to their original rules. And reversal announcements continue to trickle in, including one from the highly selective University of Miami just this past Friday. 

    Source link

  • This week in numbers: Clearinghouse retracts first-year enrollment data


    We’re rounding up recent stories, including a methodology mea culpa and billions of dollars in discharged loan debt.

    Source link

  • Freshman enrollment up this fall; data error led to miscount


    Freshman enrollment did not decline this fall, as previously reported in the National Student Clearinghouse Research Center’s annual enrollment report in October. On Monday, the NSC acknowledged that a methodological error led to a major misrepresentation of first-year enrollment trends, and that first-year enrollment appears to have increased.

    The October report showed first-year enrollments fell by 5 percent, in what would have been the largest decline since the COVID-19 pandemic—and appeared to confirm fears that last year’s bungled rollout of a new federal aid form would curtail college access. Inside Higher Ed reported on that data across multiple articles, and it was featured prominently in major news outlets like The New York Times and The Washington Post.

    According to the clearinghouse, the error was a methodological one, caused by mislabeling many first-year students as dual-enrolled high school students. This also led to artificially inflated numbers on dual enrollment; the October report said the population of dually enrolled students grew by 7.2 percent.

    “The National Student Clearinghouse Research Center acknowledges the importance and significance of its role in providing accurate and reliable research to the higher education community,” Doug Shapiro, the center’s executive director, wrote in a statement. “We deeply regret this error and are conducting a thorough review to understand the root cause and implement measures to prevent such occurrences in the future.”

    On Jan. 23, the clearinghouse will release another annual enrollment report based on current term estimates that use different research methodologies.

    The Education Department had flagged a potential issue in the data this fall when its financial aid data showed a 5 percent increase in students receiving federal aid. In a statement, Under Secretary James Kvaal said the department was “encouraged and relieved” by the clearinghouse’s correction.

    Source link

  • The data dark ages | Wonkhe


    Is there something going wrong with large surveys?

    We asked a bunch of people but they didn’t answer. That’s been the story of the Labour Force Survey (LFS) and the Annual Population Survey (APS) – two venerable fixtures in the Office for National Statistics (ONS) arsenal of data collections.

    Both have just lost their accreditation as official statistics. A statement from the Office for Statistical Regulation highlights just how much of the data we use to understand the world around us is at risk as a result: statistics about employment are affected by the LFS concerns, whereas APS covers everything from regional labour markets, to household income, to basic stuff about the population of the UK by nationality. These are huge, fundamental, sources of information on the way people work and live.

    The LFS response rate has historically been around 50 per cent, but it had fallen to 40 per cent by 2020 and is now below 20 per cent. The APS is an additional sample using the LFS approach – current advice suggests that response rates have deteriorated to the extent that it is no longer safe to use APS data at local authority level (the resolution it was designed to be used at).

    What’s going on?

    With so much of our understanding of social policy issues coming through survey data, problems like these feel almost existential in scope. Online survey tools have made it easier to design and conduct surveys – and often design in the kind of good survey development practices that used to be the domain of specialists. Theoretically, it should be easier to run good quality surveys than ever before – certainly we see more of them (we even run them ourselves).

    Is it simply a matter of survey fatigue? Or are people less likely to (less willing to?) give information to researchers for reasons of trust?

    In our world of higher education, we have recently seen the Graduate Outcomes response rate drop below 50 per cent for the first time, casting doubt on its suitability as a regulatory measure. The survey still has accredited official statistics status, and there has been important work done on understanding the impact of non-response bias – but it is a concerning trend. The National Student Survey (NSS) is an outlier here – it had a 72 per cent response rate last time round (so you can be fairly confident in validity right down to course level), but it does enjoy an unusually good level of awareness among its survey population, despite the removal of the requirement for providers to promote the survey to students. And of course, many of the more egregious issues with HESA Student have been founded on student characteristics – the kind of thing gathered during enrolment or entry surveys.

    A survey of the literature

    There is a literature on survey response rates in published research. A meta-analysis by Wu et al (Computers in Human Behavior, 2022) found that, at this point, the average online survey response rate was 44.1 per cent – finding benefits for using (as NSS does) a clearly defined and refined population, pre-contacting participants, and using reminders. A smaller study by Daikeler et al (Journal of Survey Statistics and Methodology, 2020) found that, in general, online surveys yield lower response rates (on average, 12 percentage points lower) than other approaches.

    Interestingly, Holtom et al (Human Relations, 2022) show an increase in response rates over time in a sample of 1,014 published studies, and do not find a statistically significant difference linked to survey mode.

    ONS itself works with the ESRC-funded Survey Futures project, which:

    aims to deliver a step change in survey research to ensure that it will remain possible in the UK to carry out high quality social surveys of the kinds required by the public and academic sectors to monitor and understand society, and to provide an evidence base for policy

    It feels like timely stuff. Nine strands of work in the first phase included work on mode effects, and on addressing non-response.

    Fixing surveys

    ONS have been taking steps to repair the LFS – implementing some of the recontacting and reminder approaches that have been shown to work in the academic literature. There’s a renewed focus on households that include young people, and a return to the larger sample sizes we saw during the pandemic (when the whole survey had to be conducted remotely). Reweighting has also led to a bunch of tweaks to the way samples are chosen and non-responses are accounted for.
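
    As a rough illustration of what reweighting for non-response involves – a simplified sketch with made-up figures, not the ONS’s actual weighting methodology – the idea is that groups that respond less often get larger weights, so the achieved sample is scaled back up to known population totals:

    ```python
    # Simplified cell-based non-response weighting with hypothetical figures.
    import pandas as pd

    # Known population counts (e.g. from census or admin sources) and the number
    # of achieved survey responses, by age band.
    cells = pd.DataFrame({
        "age_band": ["16-29", "30-49", "50-64", "65+"],
        "population": [12_000_000, 17_000_000, 12_500_000, 12_500_000],
        "respondents": [600, 1_700, 1_900, 2_300],
    })

    # Each respondent in a cell stands in for population / respondents people,
    # so under-responding groups (here, the youngest) get the largest weights.
    cells["weight"] = cells["population"] / cells["respondents"]

    # A weighted estimate then uses those weights - here a notional employment
    # rate, assuming respondents in each cell are representative of that cell.
    cells["employed_share"] = [0.78, 0.85, 0.76, 0.12]
    weighted_rate = (
        (cells["employed_share"] * cells["population"]).sum()
        / cells["population"].sum()
    )
    print(cells[["age_band", "weight"]])
    print(f"Weighted employment rate: {weighted_rate:.1%}")
    ```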

    Longer term, the Transformed Labour Force Survey (TLFS) is already being trialled, though the initial March 2024 plan for full introduction has been revised to allow for further testing – important given a bias towards responses from older age groups, and an increased level of partial responses. Yes, there’s a lessons learned review. The old LFS and the new, online-first TLFS will be running together at least until early 2025 – with a knock-on impact on the APS.

    But it is worth bearing in mind that, even given the changes made to drive up responses, trial TLFS response rates have been hovering just below 40 per cent. This is a return to 2020 levels, addressing some of the recent damage, but a long way from the historic norm.

    Survey fatigue

    More usually the term “survey fatigue” is used to describe the impact of additional questions on completion rate – respondents tire during long surveys (as Jeong et al observe in the Journal of Development Economics) and deliberately choose not to answer questions to hasten the end of the survey.

    But it is possible to consider the idea of a civilisational survey fatigue. Arguably, large parts of the online economy are propped up on the collection and reuse of personal data, which can then be used to target advertisements and reminders. Increasingly, you now have to pay to opt out of targeted ads on websites – assuming you can view the website at all without paying. After a period of abeyance, concerns around data privacy are beginning to reemerge. Forms of social media that rely on a constant drive to share personal information are unexpectedly beginning to struggle – for younger generations participatory social media is more likely to be a group chat or discord server, while formerly participatory services like YouTube and TikTok have become platforms for media consumption.

    In the world of public opinion research the struggle with response rates has partially been met via a switch from randomised phone or in-person sampling to the use of pre-vetted online panels. This (as with the rise of focus groups) has generated a new cadre of “professional respondents” – with huge implications for the validity of polling even when weighting is applied.

    Governments and industry are moving towards administrative data – the most recognisable example in higher education being the LEO dataset of graduate salaries. But this brings problems of its own – LEO tells us how much income graduates pay tax on from their main job, but deals poorly with the portfolio careers that many graduates now expect. LEO never cut it as a policymaking tool precisely because of how broad-brush it is.

    In a world where everything is data driven, what happens when the quality of data drops? If we were ever making good, data-driven decisions, a problem with the raw material suggests a problem with the end product. There are methodological and statistical workarounds, but the trend appears to be shifting away from people being happy to give out personal information without compensation. User interaction data – the traces we create as we interact with everything from ecommerce to online learning – are for now unaffected, but are necessarily limited in scope and explanatory value.

    We’ve lived through a generation where data seemed unlimited. What tools do we need to survive a data dark age?

    Source link