NSW Premier's Debating Challenge 2018 - Years 9 and 10 final
ANDREW LASAITIS: My name is Andrew Lasaitis. I'm the speaking competition's officer at the Arts Unit for the Department of Education. I'd like to acknowledge the Gadigal People of the Eora Nation, who are the traditional custodians of this land. I would also like to pay respect to elders past, present and emerging and extend that respect to other Aboriginal people who are present today.
Our chairperson is Amy Koralis, and our time keeper is Vanessa Li. They both attend St. George Girls High School. I will now hand it over to Amy to commence today's state final. Thank you.
AMY KORALIS: Thank you, Mr. Lasaitis. I welcome you to the 2018 State Final of the Premier's Debating Challenge for years 9 and 10 for the Teasdale Trophy. Today's debate is between Colo High School and Hornsby Girls High School. Colo High won the competition in 2001 and 2003, while Hornsby Girls High won the Teasdale Trophy once, back in 1957.
The affirmative team from Hornsby Girls High School is first speaker, Emma Hancock, second speaker, Kaitlyn Kong, third speaker, Karina Mathias. Fourth speaker, Charlotte Barry. The negative team from Colo High School is first speaker Clare Adamson, second speaker Bailey Langham, third speaker Isaak Salami, fourth speaker, Noah Vanderburg. The adjudicators are Tony Davey, Thomas Shortridge, and Lloyd Cameron.
Each speaker may speak for eight minutes. There will be a warning bell at six minutes, with two bells at eight minutes to indicate that a speaker's time has expired. A bell will be rung continuously if the speaker exceeds the maximum time by more than one minute. The topic for this debate is that schools should have the power to read any and all of their students' online posts, messages, and emails. The first affirmative speaker, Emma Hancock, will begin the debate.
EMMA HANCOCK: Our government has a responsibility to prevent harms in our society and to protect the vulnerable against these harms. The rise and increased use of the internet has meant that more people are being targeted and being involved in these crimes, particularly with young people being especially prevalent, due to their use of technology, the social pressures that they face, and their vulnerability to manipulation. It is only by monitoring online interactions that we can protect these individuals against these harms and prevent criminal actions in their later life. Hence, we propose our model.
Beginning next year, an independent body will be established with regional liaison officers who will use previously existing algorithm systems to filter students' online posts, messages, and emails for any material pertaining to bullying, harassment, terrorism, pornography, or any other criminal activity. Schools will be notified if students are discovered to be participating in such acts. And schools may also request a thorough examination if they believe that certain acts are occurring within their walls, in which case, the body will assess said students and provide appropriate findings and evidence to the school so action may be pursued.
So yes. We acknowledge that some may perceive this policy as an invasion of privacy. However, we maintain that the privacy lost by the assessment of wrongdoing students picked up by algorithms is significantly outweighed by our obligation to protect the majority, the innocent and the victims of these crimes. Furthermore, we would like to draw notice to the implementation of this system, which is already in place for Department of Education emails. And we see this model as simply making this system, which has already been proved to be effective, more effective and more widespread, thus protecting more innocent victims.
So today I will be discussing A, the impact of our programme on both current and future actions, and B, this impact on terrorism. My second will discuss the impact of our model on crime and bullying. So firstly, let us discuss how and why our model will prevent these harmful actions from occurring, both now and into the future.
Psychology plays an important role in crime. People who think that they are going to be caught or have a greater likelihood of being found out are significantly less likely to commit crimes, or in most cases, simply will not commit this crime. So this is why we see less crime in areas that are frequently patrolled by police or have high numbers of security cameras. And our model creates a similar impression of surveillance, hence discouraging these activities from occurring.
So let us look at the demographics that we will affect. So let's discuss demographic A, people who strongly desire to commit a crime, often serious crimes, and secondly, demographic B, people who are pushed by social pressure to partake in offences like bullying, drug use, or sending nude images of themselves or others.
Group A have little heed for the law, meaning that under our model, they will be forced to find other means to organise said actions, as they will want to continue these actions anyway. But because we are cutting out the use of online media, this will push their communications above ground and making them-- forcing them to occur in person. Meaning that they are easier to be discovered or tracked by police or other authorities, and they can no longer use these anonymous identities. Meaning that once again, we will-- it will be easier to monitor their activities.
So our second group, B, is a larger demographic, and hence we target them specifically under our model. So these are people who do not necessarily have a strong desire to commit crime, but due to these social pressures, they may partake in some of these activities. But due to the increasing feeling of surveillance under our model, the awareness of consequences, and the likelihood of being caught, all of these factors reduce their susceptibility to being pressured into these acts.
So we have established that said acts are less likely to occur, or they will be pushed above ground under our model, but how will we see these impacts in the long term? So students who see less crime in their youth are less likely to partake in these same actions or support these same actions in their later life, as it does not have the normalisation that it does now. And they will also be more aware of authority and the resulting consequences of these actions under our model.
So this will also protect individuals from actions that they will typically regret in later life. So for example, reducing rights of individuals who are labelled as registered sex offenders, for life, after exchanging sexual photographs in their youth. So by preventing-- by discouraging them from doing such an act, we are preventing them from having this lifelong stigma, this lifelong humiliation, the lost employment opportunities, and everything else that comes along with partaking in these acts.
So among this crime, we will see similar things occurring with other crimes, once again establishing that we are no longer reducing the employment opportunities and creating this lifelong stigma that heavily affects these individuals. So by doing so, we are hence protecting the youth in their vulnerable state, now, so that they have a better future, where they are being protected, and crime is no longer being normalised.
So onto a second note. So modern terrorism has seen an emergence in a very different nature to the past. So the planning of these attacks occurs predominantly on a large degree in online platforms. So why is this occurring? For three reasons.
So we are seeing an increase in multiple individuals being involved in these attacks, as we've seen in mass shootings in America. And for a second reason. Particularly students are susceptible to this style of--
--multiple people being involved in an attack, as they are not only heavily involved socially with each other, there's this strong creation of categorization and groups of people, but they are also heavily dependent on online media.
And for a third reason. We are also seeing a rise of overseas organisations targeting vulnerable people in our country to carry out these acts. And obviously, these people cannot interact face to face with students in our country, and hence, online media is used to perpetuate these terrorist attacks.
So the basic nature of messaging, emailing, and online posts is allowing this terrorism to occur and allowing this fear to perpetuate in our nation. So how will this prevent these acts of terrorism or reduce the likelihood of these acts? By implementing this model, we will, as established, have an effect on demographics A and B through this psychological power creating this feeling of surveillance.
Or if there are students who are particularly desiring to carry out these acts, then we are forcing them to discuss these things together, out loud, in person. And particularly if these students are going to school together, it's going to be significantly easier for teachers or principals or other forms of authority to identify that something is happening and that something is being discussed that needs to be looked at.
So an example is recently, New South Wales Police have stopped major plans for terrorist actions through monitoring online media. So one may argue that this only occurred because these were suspected individuals. But I would like to point to the example of the high school boy who shot a policeman outside Parramatta Station. A high school student.
And we argue that if students have the capacity to do these acts and due to this feeling of categorisation and mateship and people who have similar ideas to you in your school, we are likely to see communication between multiple persons or multiple parties--
--in planning these acts, and we can exclusively target this by monitoring online media, as I've discussed, as students are increasingly dependent on it.
So this is the imperative that we see to stop these acts from occurring, to protect our innocent demographics and our victims from having these horrific events occurring and derailing their lives potentially forever. So we as the Affirmative Team see the importance of protecting the wider population over these minor breaches in privacy.
And I will-- just to link back to my model, we are only infringing on the privacy of students who are being assessed by algorithms to be distributing or being involved in material that is potentially harmful, that is potentially posing a threat to both them, their peers, or the wider society. And hence, we are not infringing on the privacy of innocent people and of victims. And we believe that if you are going to partake in criminal activity, you are sacrificing your right to that privacy, and thus, we are proud to affirm.
AMY KORALIS: The first negative speaker, Clare Adamson, will begin their case.
CLARE ADAMSON: To start off this debate as the first negative, imagine a future in 10 years' time. A year 12 student has finally finished their HSC exams. They leave the gates of their school for the last time, celebrating. This student isn't just celebrating because they finished their exams. This student is celebrating because for the first time in 13 years, their every online movement isn't being monitored by their school.
13 years for every student who gets an education in New South Wales. 13 years of being monitored, of constantly having to think about their reputation, having to think of the Big Brother constantly reading their posts, their status updates, their texts, their private messages, and their emails. Is this a future that you want for your children or your peers?
As the first speaker of the Negative Team, I would like to open our case by rebutting the Affirmative's points. They've stated that the government has a responsibility to protect society from the harms that they cause and especially to protect the vulnerable groups of society. And they've used the example of children.
Now, if we're just reading the emails and the messages and the posts of children in this group, why are we not doing it for other vulnerable groups? Why aren't we doing it for perhaps the elderly as well? This is a flaw in their case. They've stated that their model will be implemented next year for an independent body, and they will use algorithms to detect who is at risk of terrorism, who is at risk of harming themselves, et cetera.
We believe that their model should be completely rejected, as it goes against the topic of the debate, that schools should be reading emails and messages of their students. We believe that to have an independent body be reading their emails goes against what the topic says, as it is not the school that is reading the email, it is the independent body reading the email.
They've stated that the obligation to protect the innocent outweighs privacy concerns. And they've also gone on to state that teachers' emails can be read. Teachers choose to consent for their Department of Education job. They've chosen to consent to this. Students cannot choose. Students have to be educated.
They've stated that there are two demographics, demographic A, who choose crime, and demographic B, who are pressured into crime. What about demographic C, the innocent, the majority of students?
They've stated that people who are caught in youth are less likely to reoffend. This is false. Because people who are caught and perhaps imprisoned for their crimes in their youth, they're more likely to go in to reoffend. They're more likely to commit more serious crimes. They've stated that terrorism planning occurs online.
Most of Australia's terrorist attacks, most of their crimes, are not committed by youth. For example, the Lindt Cafe siege, a recent terrorist attack, Man Haron Monis was not a student. He was not a youth. There is no reason this could have been prevented by surveilling the youth.
They've stated that in America, school shootings and other incidents could be prevented by this. American-style school shootings, they don't occur here. We have specific gun laws, and we have specific other laws in place so that students don't have access to these weapons. And there is not a culture here of that kind of attack.
The affirmative is treating all students as criminals, and there are already systems in place to make sure that students get the best of their education. They didn't define the scope, and they didn't define schools, so we will do that now. We're defining schools as both primary and secondary schools. So all students, from kindergarten to year 12, in both public and private schools. This includes all New South Wales schools.
The people who are able to read these messages will be teachers, principals, school counsellors, career advisors, et cetera. As it says in the topic, schools should be able to read their students' emails, not an independent advisory board, as the Affirmative has stated. And we reject their model and definition for the same reason.
Online posts, messages, and emails, such as online posts can be anything on Instagram, Facebook, Reddit, Twitter, et cetera. Messages can be private messages on these apps or on others, as well as texts associated with a specific phone number. Emails can also be private and school emails. So online posts can include posts for private consumption and for public consumption.
So private means someone in a group, or an account with privacy settings, where people in the public just looking at their profile can't view it. And this also includes public consumption, so anyone can view these posts. Private messages intended for groups and individuals are sensitive data. These can be read under the Affirmative's model.
For reading emails, school emails and private emails. School emails can include to and from teachers about work, career advisors about potential apprenticeships and work experience, et cetera. Private email can be to and from work, friends, hobbies, and I'll go on to elaborate about privacy in my first point. We've defined the scope as New South Wales Schools formed under the Department of Education. And onto my allocation.
My first point is privacy and sensitive data and the potential for misuse and abuse by schools. And my second speaker's points are it limits freedom of expression and cultivates a negative student-school relationship.
And onto my first point, privacy concerns and sensitive data. Privacy concerns. In the classroom, under this topic, any school staff, such as classroom teachers, school counsellors, principals, head teachers, et cetera, would be able to read any student in the school's e-mails. Their messages, their emails, their online posts, as we have already defined. A lot of this information--
--is not necessary to the day-to-day running of schools. It is not necessary to a student's well-being. For example, a student's personal issues. Maybe they just broke up with their boyfriend. Maybe they have a certain medical history that they don't want the school to know about. Maybe they have hobbies that are not necessary or important for the school to know. Maybe a relative has had a serious injury or has died. And unless the student or parent chooses to inform the school, it is not something they need to know.
The impact on students and staff. My second speaker will elaborate as to the relationship that is negatively affected by the school teachers and students. It takes away choice for students and parents to tell the school on their own terms.
Privacy. The government can't read the messages of adults and the elderly. These are also vulnerable groups. For example, adults that perhaps have a mental disability or adults that are elderly and need help looking after their own affairs. In addition to this, the Affirmative has stated that terrorism is a big contributor as to why they are reading this. Under these privacy rules, would this lead to further surveillance of the population? Will it be expanded afterwards to not just students, as why not?
To conclude my first point, under this topic, privacy concerns and sensitive data would be breached by these regulations. And onto my second point, the potential misuse and abuse of powers by schools. If teachers are reading their students' emails, their personal messages, their online posts, will these teachers be able to cope with the extra work to find time outside of work to be able to read this? Will they require special training? Who will provide this training? All of these are questions that the Affirmative has perhaps not thought about. The potential misuse and abuse.
For example, as we've defined the topic as public and private schools, the ability for discrimination against, perhaps, gay students in private schools. This is something that is both outlawed under the Gender Discrimination Act, but is also allowed with private schools. So will they potential abuse-- will they have the abuse of powers by teachers to say, oh, to find something out about their students that can be used against them in a negative way.
Another example is perhaps-- and this is a very small population, and it hardly ever occurs-- but predatory teachers. For example, a student makes plans to meet their friend at a specific time and place. A teacher could read this, or perhaps the school counsellor could read this, and you can go from there. It can have very negative consequences.
The misuse and abuse of powers by schools can lead to very negative consequences for students. And in conclusion, this is why we believe--
[bell constantly ringing]
--that they should most definitely not-- we are proud to negate. Thank you.
AMY KORALIS: The second affirmative speaker, Kaitlyn Kong, will continue their case.
KAITLYN KONG: As online communication becomes increasingly popular, people with the intent to harm others are allowed to remain anonymous online. By implementing our model, we protect those who are at risk of harm through online avenues, with only a minor breach of privacy to those infringing the rights of others. Today I will discuss the planning of crime online and bullying in schools, but first I would like to respond to the opposition's case.
First I would like to reclarify on our model and say that we believe that it does follow the topic due to the fact that schools are part of the wider government education system. And we plan to institute a government liaison officer at every school, within every school, who will set up algorithms and analyse these results. Therefore, this occurs within the same sphere and prohibits teachers from reading what students are saying. Rather, this officer is the-- only able to provide information about students in which there's evidence that students are committing crimes and doing the wrong thing or when the algorithm has pulled up something problematic.
And teachers, we'd like to emphasise that teachers will not be allowed to simply find out what specific students are doing, only if a student has been exhibiting problematic behaviour that could be criminal or if the algorithm has pulled up something that clearly shows they intend to harm others.
They've also spoken about how this is a breach of privacy because people are constantly being monitored. However, again, they will not be constantly monitored. An algorithm will monitor them. An algorithm is not a person that will be spreading this information to everyone and leaking their information. It will only be if there is problematic behaviour that people will be able to see this. And it will not be everything they've done. If one student has done something wrong, only the things that they've done wrong will show up in the algorithm, not every single thing that they've ever done online.
They've also spoken about how if we get students in trouble for things that they're doing online, and then they go to prison, they're more likely to recommit. However, we'd like to say that if students are getting away with crimes, are they then not more likely to recommit when they've not been punished or not even been told that people are aware of what they're doing? If students are allowed to get away with crimes, we think that that is even more incentive for them to recommit these crimes.
And also we'd like to mention that prison is not the only form of punishment. If students are doing something wrong, they can get detentions if it's something minor, or do things like community service or therapy, and prison is not the only way for students to be punished if they're not doing something that's majorly criminal and illegal.
And also we'd like to emphasise that the government's role for justice is incredibly important, and that we need to be using these algorithms to get people in trouble for things so that they do not reoffend, because they know that they will be caught for things. And this will not only prevent them from doing things in the first place, but get them in trouble when they have done the wrong thing.
They've also spoken about discrimination against particular students. However, we'd like to refer that back to the algorithm, in that they can't-- if there is a teacher who is particularly homophobic, they cannot request information on a certain student or get a certain student in trouble unless the algorithm's picked it up, or this gay student has exhibited the same types of harms that other straight students may have.
They've also spoken about how we are only implementing this on the youth. However, we-- and why not do it to the elderly? And if we've done this now, why not bring this out to a sphere of the entire population? However, we'd like to say that we are starting with youth because youth have high rates of crime, especially when they know they can get away with it, and especially because youth are using online forums more than older people. But we are also not opposed to this algorithm becoming mainstream and impacting all people in Australia and not just students. But for now, that is what we are discussing.
And then also we'd like to say that elderly people don't have the same rates of crime that young people do, and we are preventing school students from partaking in crime, and this also will prevent them from partaking in the future. Because if a student starts committing crimes in school, they are probably more likely to continue this behaviour later on. Whereas if you stop it at its root in high school, then they are less likely to partake in crime later on.
And they've also spoken about how the Department of Education is monitoring teachers, and that teachers are consenting to this. However, we'd like to point out that it's not specific to teachers. All people, students and teachers alike, are monitored by the Department of Education through their school emails and school accounts. And if students aren't even aware of this, this clearly shows how this is a minor infringement, if students aren't even aware of the fact that they are being monitored.
In terms of this taking away-- and they've also spoken about how sensitive information can be leaked to teachers. However, again, it's not like teachers can just look on some kid's text messages and see if their relatives have died or if they're going through a really difficult time. Things will only be pulled up on the algorithm if they are showing that such student has been exhibiting concerning behaviour. And that teachers are not going to have access to this.
Now onto my case. Today I will address the questions. I will look at crime first, and I will look at the questions of, number one, how students use online forums to involve themselves in crimes and how they retain their anonymity this way, and why this is a problem, and then also how our model will stop this.
So within my first question, crime within youth is becoming increasingly digital. And with the ease of the internet, it makes more sense to liaise online without even having to take a step out of their bedrooms. Students involving themselves in crimes such as the distribution of illegal drugs, robberies, nude images, and inappropriate teacher-student relations will often use digital means to discuss these actions. And this allows for a certain element of anonymity, because they are hiding behind their screens, and they know that it is very hard for them to be held accountable for their actions.
For example, if students were planning to rob a petrol station and discussed it online, no one would even be aware of the fact that it's happening until it actually happened, because it is basically impossible to have a police presence everywhere, and even if it was, reactionary action is not enough, and we need preventative legislation that will stop students from committing crimes before they've even done it.
And why this is a problem. When students are able to discuss the enactment of crimes online without any kind of algorithm, it is impossible to catch and punish them. For example, students who are dealing or purchasing illegal drugs can discuss it online. And since no one can see these messages, they can do a drop off with very little risk that anyone even knows what is happening. Digital forums allow crime to occur very discreetly and reduce the risk of arrest and further legal action, even when people are committing crimes.
Now my next question. How does the model stop this. If students are aware of the fact that their online activity and communication is being monitored, they are much less likely to use these avenues, which forces them to offline avenues, which makes it so much harder and less convenient for them to actually discuss crime, and therefore more difficult to enact it. And for those that would continue to discuss committing crimes online, it catches them at a much earlier stage.
Our model also prevents the spread of pornographic images of students and works to stop inappropriate relations between students and teachers by taking away their avenue to conduct their relationship or holding them accountable when it does indeed happen. Sexual relationships between students and teachers are able to occur because technology acts as an enabler. And we see these relationships occurring for long periods of time without legal action.
For example, there have been multiple arrests of Knox and Sydney Grammar teachers for their illegal relationships with teenagers, but we'd like to point out that it took decades for any legal action to occur. If we had an algorithm system that actually monitored what these students were doing, we would be able to get them in trouble much earlier and stop it before more harm was able to occur. And thus with monitoring and preventative measures and also post-crime legal action, this will eradicate crime and ultimately make our society and students much safer.
Next I will look at bullying and look at two questions, which are what does bullying look like currently, and what will our model do to eradicate this. So first of all, looking at bullying at the moment. Bullying has morphed into this constant accessible culture which we see in schools.
And because of the immense amount of technology at our fingertips, this same technology which is central to society is used as a vehicle for bullying. And it means that bullying does not just happen within schools. It happens 24/7, when students are at home, when they're waking up in the morning, and where they're at school.
And people are constantly online, which means that there is mass accessibility to people. And this ability to contact people constantly drastically magnifies the issue of bullying, which is a serious and widespread issue. And because of the potential anonymity that bullies have online and can hide behind, this issue is untraceable and ultimately unpreventable without the institution of online monitoring.
Next, looking at what our model will do to address this issue. Our model will affect a few different groups. So the first one is preventing borderline abusers. So these are people who are uncertain about bullying. They don't really want to bully other people, but they can probably be talked out of it. So we are protecting these people from themselves by creating doubt and fear of monitoring and ultimately dissuading them from bullying.
People who are mean to other people on a whim or who are peer pressured into bullying will be stopped by this because they can be easily talked out of it. And the algorithm essentially works as a scare tactic, because they don't want to get in trouble for it. So if they're not that passionate about this, they will stop bullying other students.
For example, if you think that a police officer is watching you, you wouldn't pull a gun on someone. In that same way, students who would bully other students online, if they know they are being monitored, will not do this. And then if they don't stop, they will be caught much more easily.
And then the next demographic this impacts is silenced victims.
[bell constantly ringing]
So we, as the Affirmative Team, understand how difficult it is to report violence, and that is why we [inaudible]. Thank you [inaudible].
AMY KORALIS: The second negative speaker, Bailey Langham, will continue their case.
BAILEY LANGHAM: We, as the Negative Team, are in strong disagreement with the proposed topic that schools should have the power to read any and all of their students' online posts, messages, and emails. We hold this view due to the fortifying array of reasonings that are in favour of keeping the status quo. Our first speaker has already touched on how implementing this model will infringe on students' privacy and impact on student well-being. I will be explaining how the implementation of the model will have a negative effect on student-school relationships as well as limiting freedom of expression between students.
So the opposition has already highlighted that it will deter future crime if youth are caught in the early stages. We fully believe that this is true, and we agree that addressing issues at young ages does help to deter crimes. However, there are already more than adequate methods of online monitoring that are efficient and do not involve widespread monitoring by large amounts of people that are not necessarily properly qualified to do so. There are already liaison officers that work closely with schools, and we believe that this is already an appropriate method that's working currently, so we don't believe that there is any reason to change the status quo.
We'd also like to ask how this algorithm is helpful. Because algorithms could lead to wrongful convictions, and these algorithms may not necessarily be accurate. So we don't see how algorithms are a solution to the problem currently. They cannot identify students in crime if they don't use emails, online posts, and messages. Why is it limited to these platforms? There are vast other arrays of platforms, such as video chatting and others like that. So it's very specific in the type of media that it is monitoring. So we believe that that doesn't help.
And they have gone on to state that it protects the vulnerable. However, it cannot identify the vulnerable students if the students aren't actually using these specific emails, online posts, and messages. And also you'll see a definite rise in the number of media platforms that are going to arise that are harder to track and harder to regulate if students aren't allowed to use very accessible forms that are easily regulated at the moment.
So they have gone on to discuss that students aren't even aware of being monitored at the moment. So we believe that the monitoring that is there at the moment, and this form, where teachers and parents can notify police liaison officers, and then they can address the issue, is very effective. So we don't see any reason to change this.
And they have gone on to bring forth the idea of preventative legislation that already a wide array of agencies have. And they have said that if students know that they are being monitored, that they just won't do it, or that it will become harder for them to do. However, instead, they'll just use platforms that are harder to regulate and identify, therefore worsening the issue.
So onto the second part of bullying. And they have said that it's a more accessible culture, and that it's a vehicle for bullying, and people are constantly online. We believe there are already help lines, counsellors, at schools, and police liaison officers, and a vast array of different avenues are already adequate for what is occurring at the moment. So there's no reason to change this.
They've said that it prevents borderline bullying and creates doubt and fear, which is a scare tactic. Should we not be-- should schools not be educating people on bullying rather than instilling fear in people? We believe that this completely goes against the purpose of school and their core values.
They have said that schools should have the power to read any and all of students' online posts, messages, and emails. Read any and all? Not selective? Schools should have the power? So the Department of Education is not schools. It is the organisation that is in charge of schools, not schools directly.
So we'd like to ask how this independent body is going to be able to monitor messages on certain platforms. Some messaging services are completely anonymous, for example, Telegram, or aren't saved and are deleted within seconds, such as Snapchat. So how are schools going to read these messages?
So what's stopping students from installing applications for privacy, such as VPNs, that will then prevent these liaison officers from accessing this information? And there are more efficient authorities that handle these cultures currently and messages. So therefore it is irrelevant to change the status quo.
So onto my substantive. Firstly, it will have a negative effect on student-school relationships. The whole idea of schools overseeing students' private online correspondence takes away the element of trust that schools currently have. These relationships between students and schools are imperative in youth development.
Teaching, making positive connections in an adult and professional learning environment requires a lot of trust in the relationship. By implementing the opposition's model, this will eliminate trust. So students are less likely to communicate online about problems, where they usually feel safe, because it is generally a safe medium.
So it is harder for others to have awareness for their peers' situations, and it is far less likely to be used for helping to fix problems, because people are unaware. It takes away the point of being a private medium. And school should be a safe, trusting place like it is in the status quo.
School should not become a place of distrust and the feeling of being controlled and watched, which would happen if the Affirmative implemented their model. Again, the direct consequences of this implementation will be the rapid development of new online mediums that are more difficult to regulate and keep people safe, so it is vital to avoid this.
With a lack of trust between students and schools, subsequently, students won't feel safe. They'll become less engaged and participate less. This avoids the purpose of school, to prepare young people and students for the future, making them well-rounded individuals in a safe environment.
It allows for the skills of independence to develop in the status quo. So therefore, we should not have negative impacts on student-school relationships, as we currently have positive. Additionally, it limits the freedom of expression. So humans are a social and expressive species. In our modern, advancing world, online is just another platform for expression of oneself. So students should not be monitored online by schools, as things can be taken out of contextual meanings and will lead to wrongful--
--conviction. And schools should not be-- students should not be restricted by school policy. Because some school policies may not align with societal values and student values and morals. So why should they be forced to conform to what the school believes, when schools should in fact be teaching students to have their own individual opinions? They are meant to prepare students for a future where not everything is controlled, and people have their own opinion. So students should not be paranoid about schools reading and viewing their own sensitive information.
Students should feel freedom, and, like everything in life, not be controlled. So we should be teaching this. We are a free country with free speech, and we should definitely exercise this by keeping the status quo in our education system.
So conclusively, we believe that we should not--
[bell ringing constantly]
--schools should not have the ability to regulate emails, online posts, and messages.
AMY KORALIS: The third affirmative speaker, Karina Mathias, will conclude their case.
KARINA MATHIAS: Before we continue to the two main issues that today's debate has come down to, we would like to point out some issues brought up with the model and how this is all going to work. To begin with, we would like to outline the nature of technology as it stands, and that is that everything is trackable. Anything you do online is trackable. There isn't this whole sort of Snapchat deletes messages within two seconds, and you can never find it again. That information is retained for a minimum of two years. Therefore, by running these algorithms, even if the Snapchat is deleted on your phone, all that information is still kept and can be accessed if necessary.
Furthermore, the opposition brought up that why is it limited to the technology outlined in the topic. And in response, the Affirmative believes that since this imperative stands, we will just extend this model to the other technologies that exist, like video calling, which the opposition brought up as an example.
Furthermore, there's this whole issue that is surrounding the model that, as the Affirmative Team, we have brought forward. And we would like to outline that with this whole independent liaison situation, it is actually a liaison within the school, sent from the Department of Education, which exists in the same sphere of schools as a whole.
Furthermore, the Department of Education will set these outlines as to what should be picked up as dangerous activity or what could potentially lead to harmful activity. So that whole issue about individual schools and private schools, morals, and thoughts being perpetuated onto schools' creative freedom does not stand.
Now onto the two main issues of today's debate. There's the issue of privacy versus protection and also how will this information be used. All of these require this main characterisation as brought up by the first speaker of the affirmative team.
This takes into account four main demographics. Demographic A, the wrongdoers. The people that will commit these kind of hardcore crimes, regardless of what occurs. Demographic B, the large population of people who are actually really easily swayed by their peers, by what's going on around them, and this whole anonymous nature of online technology that allows them to commit these actions. Demographic C, the innocent majority. The majority that doesn't actually do anything and don't partake in any kind of online criminal behaviour. And this demographic isn't really affected at all. And now demographic D, the demographic that are the victims of this kind of online tyranny and what comes out of using technology to commit further crimes and other illicit actions.
Onto these two main issues. The first issue, privacy versus protection, which we see as the main issue of today's debate. The first speaker of the opposition brought up this whole point about privacy and the use of sensitive information and that people might have sensitive information like illnesses and whether they broke up with their boyfriend, and this will make them feel uncomfortable knowing that the Big Brother is watching.
The Affirmative Team actually places a priority on protecting students from themselves, this being deterring them at a young age from committing these crimes, so that they can have this long, full life where they are open to all of these job opportunities because they didn't make that silly mistake as a result of peer pressure online, by passing on illicit nude pictures to somebody else, listing them as a child offender. We're looking at protecting their long life over this maybe small breach of privacy. But as stated in the model, the school and most people won't even go into this detail about what illnesses you have, the messages you send to your boyfriend. That doesn't pop up in the algorithm sphere.
Furthermore, the Affirmative places priority on protecting innocents from being affected through terrorist attacks that might occur. We're seeing school students as being highly vulnerable to being swayed by overseas organisations. And these terrorist attacks that do occur, they wreak havoc on people's families that are affected by these attacks.
The father that was killed who was also a police officer in the Parramatta shooting, the people affected by that were his children, his wife, the people that surrounded him, the people that knew that boy who went to school. We are looking at protecting all of those individuals by implementing this policy. And we believe that this is at a higher standard than this minor breach of privacy.
Furthermore, as our second speaker spoke about, we believe in protecting silenced victims. We, as the Affirmative Team, recognise how tough it is to actually speak out about what occurs to you and what incidents you might face throughout your life, and that this might never even come out in the course of your life, what you've been through. And by having this algorithm there, we're not only protecting them from having to come forward and then face repercussions for coming forward and people judging them on how accurate they are with their convictions, we're also protecting the whistleblowers, the people who believe that there's something wrong, but every time they say something, people come after them for speaking up as to what occurred.
And the status quo, which the opposition believes so strongly in, we feel is not doing enough to protect these people. There are still bullying cases that occur. We believe that by just having a liaison officer from the police force coming, people don't come to talk to them about the bullying that they undergo. There is so much more that happens within our school and society that affects children at a young age that we believe the status quo doesn't actually protect against. And now with modern technology, it is happening at a higher rate and pace that people can't control, which is why there needs to be this adaptation to how we protect them by implementing the model which we, as the Affirmative, put forward.
Furthermore, the opposition created this issue under privacy versus protection about how these algorithms actually work. These algorithms take place everywhere.
Any time a suspect has been labelled a suspect, these algorithms are run on all of their technologies to see whether they are in association to these other events. We're just looking at bringing them here on a wider scale, which means that rather than having someone identified as a suspect due to whatever reasons they may have, this is just enjoying the wider protection and seeing a more holistic view and protecting everyone from the get go.
The opposition also brought up this whole issue about how people might use VPNs to avoid it. That's looking at demographic A, the people who will commit these actions anyway. And the algorithms are already so strong that they are able to pick up on this kind of change to VPNs, et cetera. Furthermore, the issue about how there will be new media which will be harder to track, this falls under demographic B, who are unlikely to partake in additional methods or change their use of platform to commit a crime due to this whole feeling of a Big Brother watching.
Now, onto the next issue of the debate, which is a much more minor issue. It's how this information will be used. The opposition brought up the point that there'll be a negative school and student relationship. They also brought up this issue about how it might limit your freedom of expression.
We're grouping these two points that they brought up together because we feel like both of them don't stand against the model as it currently stands, being that the schools and the teachers that you are surrounded by don't actually have access to the information, to the messages that you send. All of that only runs through the algorithmic process.
The model outlines that the government liaison will be the only person to access this information. And that even them accessing it, they don't see your texts and everything that you send. They only run it through the algorithm. So this information never really goes out unless there's a small chance of you being suspected by something.
Furthermore, there's this issue that if an identified student might be convicted when they're innocent due to their creative expression that might infringe on the government's morals. We believe that this will stand when this model occurs. That an identified student has the right to explain themselves before they are punished. No government is going to convict a student for a drawing without the student having the chance to explain themselves and saying the reason behind what they were doing.
If there is something that is so blatantly obvious that it is associated with a crime, that will obviously lead to a conviction. But there's other stuff. There will always be stuff in the middle, where you don't really know where it lies. And then the student has the right to explain themselves and provide evidence to say that they are not part of any of these illegal actions.
Therefore, as previously stated, by government guidelines--
[bell ringing constantly]
--this is the same for all schools in New South Wales. Therefore, to protect our students and the future of them, we are proud to affirm.
AMY KORALIS: The third negative speaker, Isaak Salami, will conclude their case.
ISAAK SALAMI: Good afternoon. The topic of this debate has been discussing whether we should have schools check the messages, online posts, and emails of their students. So this debate has come down to discussing two main themes or ideas, the theme of privacy and the impact this change will have on students, and as an extension, society as a whole.
So I'd first like to focus on my first speaker of the negative team when discussing the issue of privacy. So they came out saying that this change will be an invasion of privacy. About how this will allow the Education Department to actually view the online correspondence of students and that can then later have an emotional effect on students. The Affirmative Team came back saying that oh no, they are going to have an algorithm that identifies certain-- I assume words-- that then sort of hint towards criminal activity.
But you still need to realise that they are still monitoring those messages. There is access to them. Then that opens up the opportunity for misuse, and then there's the risk of that sensitive information being read and distributed. So that whole risk is still there. That also further clarifies each point of our Negative Team, because each also rests on the whole idea that schools have access to what you're actually sending to people and what your online activity actually is.
So then our second speaker then continued this whole-- sorry. Our second speaker then went on to the next theme of the impact this change will have on students and society as a whole. They talked about how yes, this change can lead to the misuse of information by education employees, say the liaison or other school teachers who also have this information. They talked about how certain schools and certain-- like the state itself has its own agenda and has its own policies and ideas that it wants to promote, so then it will target students that may then be-- what's the word-- in conflict with that. And I'll use the example of Anglican schools and about not actually liking gay people and all that sort of stuff.
The second speaker then also continued this line of thought by talking about how this online monitoring will prove detrimental to the student-school relationship, as students won't really be able to trust their school and the Education Department as a whole anymore, and that leads to more negative consequences down the line.
Now I'd like to go on and focus on the Affirmative Team on this same issue of the impact on students. So the first speaker came out by saying that under their model, the schools will actually be notified if a student actually commits criminal activity. However, we'd just like to point out that criminal activity is already handled by authorities such as the police or the anti-terrorism agency, so we don't really see why schools need to be involved with this.
They keep coming back and saying that-- trying this idea that every school student is going out committing crimes every day of the week or something like that. Like it's a really big, prominent issue. But we need to realise that most people who actually commit crime are adults. Younger adults, true. But adults. A kindergarten kid isn't going to go down to the local bank and then blow it up and rob it sort of thing.
And we'd also like to justify that. Yes, in our counter model, we actually stated that we are taking in your primary schools as well as your high schools. So yes, you would be monitoring young kids. What's the point of monitoring young children, who are way less likely to commit a crime than, say, someone in their mid 20s, who is actually more likely to commit that crime?
The first Affirmative also talked about how by actually monitoring children at a young age, it deters them from crime at a rather young age, because they're scared of being monitored, and they're scared of being punished, and that leads to less crime in the future. And while yes, this is true, that less crime as a youth translates to being less likely to commit crimes in the future.
We'd also like to point out that there are authorities, such as the police, who take care of youth who actually commit crime and then try and prosecute them and deal with them in the court of law and using certain regulations already in place by the law, and this already does actually have some sort of positive effects. Maybe it doesn't stop everyone from becoming a criminal as they become older, but it does have a positive effect. You cannot just straight up dismiss that. And while it is unfortunate that some people do slip through the cracks, and then they go on to commit crimes further on, that's why we have prison.
I'd also like to continue on by focusing on the second Affirmative's point. They were talking about what this change will have on the whole spectrum of crime. They talked about how reactionary action is not enough, and that we should be preemptively stopping crimes before they even occur. For this they want to protect both society as a whole, because people aren't getting hurt from crimes, and also protecting these young schoolchildren, because that way that stops them from leading into a life of crime.
However, I'd like to point out that our current court of law is based on innocent until proven guilty. You actually have to prove someone did a particular crime to actually then prosecute them. This sort of muddies the water a little bit. Because just actually monitoring and identifying correspondence between students that hints towards a crime, they then become the subject of the authorities, such as the police.
So while they-- and the Affirmative Team came back and said, well then, yes, someone might not actually be committing the crimes, and they'll have a chance to actually explain themselves. But that still doesn't take away the fact that for students who may not be able to explain themselves, and who said, yes, I was going to commit this crime, what's going to happen to them? Are they going to go to gaol? What are they going to go to gaol for? For talking about committing a crime? So you understand, this goes against the law system that we have in place in our country.
The second Affirmative also talked about how this sort of system will stop bullying. The algorithm will somehow pick out phrases and terms that identify bullying, so then that allows schools to tackle bullying. But I'd also like to point out that bullying isn't exactly a crime. And they often talked about how the algorithm would pick out certain words that then relate to crime.
I'd also like to focus on the whole idea that bullying is a wide spectrum. It comes across a wide spectrum. So what words would you actually look for to identify bullying? You understand? Like, say, oh, I'm going to punch such-and-such on Monday, yes, you understand that's bullying. But then there's other more subtle things of saying maybe a joke that could come off as bullying, like maybe a racially offensive joke that's bad. But would you then go and punish someone for bullying because of making a joke?
Now I'd like to just come back to the negative team and focus on our third speaker's point about how this change will limit students' freedom of expression. As they've pointed out, that yes, we as humans are an emotionally expressive race. We actually like to express ourselves. And online actually provides a platform for you to express yourself. You actually get to voice your opinions and beliefs and actually get to express who you are as a person, right?
So the Affirmative team's whole algorithm then limits this by regulating what you can and can't say. And then you're being punished for saying things that the algorithm doesn't agree with. Well, yes, this might be good if you're saying I'm going to go kill this person. That can pick you up there and identify you.
There's other little subtle things. For example, saying that you identify as something that might be condemned by the state. And of course, by the state, I mean the government. So you see, just from understanding that, you understand that this is actually limiting our freedom of expression.
And yes, while certain forms of online monitoring do happen in society, everyone generally agrees that invasion of privacy is a bad thing. The Affirmative keeps saying that invasion of privacy is OK because it makes you safer, but no one likes their private information being seen by someone else. No one likes knowing that someone is looking at their messages, even if it is for the greater good.
So if we know that invasion of privacy is a negative thing, why are we encouraging this in our schools? Why should schools, an institution that is about preparing students for the future, educating them, and creating well-rounded individuals, be making students scared of the omnipresent Big Brother? Thank you.
AMY KORALIS: A member of the adjudicating panel will now deliver the adjudication and announce the results of this debate.
THOMAS SHORTRIDGE: Hi. So on behalf of the adjudication panel, we want to once again congratulate both teams, for not only a great debate, but for first of all, making it so far in this competition. So join me in another round of applause.
So this was a really excellent debate. We would have one general point of feedback on behalf of the panel, which is saying that both teams, on occasion, need to be more grounded in the arguments they are making. So often there were ideas that seem a bit farcical or not quite realistically put at many points that I think undermined the credibility of both teams' arguments at points.
So for the Affirmative, that's about perhaps more of a discussion of bullying, which is more likely to occur than outright crimes, or on the Negative, more examples of the kinds of actions that students need to discuss that they would then be less likely to. Both of those arguments were made, but not made in the most realistic form. So in going forward, teams always need to consider what are the ways in which these are actually, on the ground, going to be occurring, and if this policy was actually done, how would that occur.
So with that in mind, we saw this coming down to two issues in what was a very close debate. So the first question then is whether there is a meaningful subversion of privacy by this policy, and the second is whether this will prevent the crimes, bullying, or harms that the Affirmative seeks to prevent.
So the contention we get from the negative on this first issue is that this takes away the privacy of students across a number of years, and often this is sensitive information, things like medical records, their sexuality, or the problems that they're having in their love life. And then secondly, that there is some practical importance to privacy, like you need to be an expressive human and talk to everyone around you.
So we believe that privacy was important in this debate. The Affirmative says three things in response to that. The first is to point out that there's not an absolute right to privacy, and that it is already infringed in some ways by checking students' emails, and also that we have restrictions on DET emails as well as students'.
The second and better response is to say that there is a limited imposition on this because it's done by an algorithm. So while students might still somewhat fear that these problems-- or that their things are being checked, that was not the reality of most of it, which I think mitigated some of the privacy arguments in the negative.
And the third and best response was to say that this is less important than the prevention of crime. But that argument then relied on the Affirmative sufficiently proving that there was going to be a significant reduction in crime. Because if there was, then the harms to privacy were likely less important, as we heard from first Affirmative. But if the Affirmative did not sufficiently show that this crime would be prevented, then the harms to privacy that the Negative brought forward were going to win out.
So how did this affect the kinds of crimes and bullying and harms? The Affirmative, from the get go, says there are lots of important things to stop, things like terrorism, things like crime, that are being planned, or things like bullying. And that there is an overwhelming or predominant use of things like the internet to plan those things, and that we would stop two kinds of people, those who are really hardcore, and those who are socialised into it, and give awareness to these issues.
The Negative have three responses. The first is to say that there's a sufficient solution to those problems, things like police. But the Affirmative points out that that isn't sufficient, because people aren't going to the police, and there are still lots of problems to prevent.
The second and better response from the Negative is to say that you're going to go to other platforms, things like Snapchat or use of VPNs. The Affirmative points out that you aren't really sufficiently able to hide from that completely, and the way that data works in the modern society is it's always still trackable, and you can't ever really be anonymous. So we are still able to prevent people, even if they tried to evade the scope of this policy.
And the final response of the Negative was well, the algorithm doesn't really work that well. And their reliance on it, in many cases, meant that perhaps you weren't able to fully track kinds of bullying. We thought as a panel that that came out a bit late, stressed most by the third Negative, but it really just minimised the effectiveness of this policy. And we were still willing to believe that this would, in many cases, prevent the kinds of things the Affirmative were talking about.
And given the framing of this debate, where the kinds of things being prevented were acts of terrorism, or bullying that would destroy people's lives, or child sexting that would be really harmful for lots of people, any prevention of those that was able to be proven, as we think the Affirmative did, was more important than the harms to privacy from the Negative, and for that reason, we've given the debate to the Affirmative.
AMY KORALIS: A team member of Colo High School will now congratulate the winners, and a member of the winning team will then respond.
NOAH VANDERBURG: OK. First of all, to you guys, great job. You really deserve to be here, and you did a fantastic job on your debate. So congratulations. You really deserve it. Good job.
Also, to our wonderful panel of adjudicators, they're really talented. And for them to take time out of their day to sit down and watch our debate, we're really thankful. And I'm sure not just our team, but you guys as well. So thank you so much for your time.
And then also to our wonderful debating teacher, who drove us around and took us to McDonald's and everything. Thank you so much for taking the time out of your days and your lunch times to sit down with us and talk about random topics that probably have no impact on our lives whatsoever. So thanks so much.
And then also thank you, audience, and all our supporters from school, because for you guys to come down here, it's really great. Thanks to you guys. And finally, my team. To Clare Bailey and Isaak, we've been together for a long time, and unfortunately, this is our last debate. So. Aw. Yeah. So thanks to everyone. Have a good day.
CHARLOTTE BARRY: Thank you so much to the Teachers Federation for hosting this event. Thank you so much to the panel of adjudicators for your dedication and time to adjudicating this debate. Audience, thank you so much. You've been really good, really quiet, and it really means a lot to us that you're here, and we appreciate you.
Thank you so much to the Arts Unit and the coordinators for organising this whole event. It's been really fantastic from the start to the finish. And also thank you so much to the opposition. You provided a really hard debate for us today, and we really enjoyed debating you. So thank you. And we wish you luck in your future debates, even though you said you won't. Don't know.
AMY KORALIS: I now call on Andrew Lasaitis, Speaking Competitions Officer, to return to the stage for the presentations. Ladies and gentlemen, from Colo High School, the first speaker, Clare Adamson.
From Colo High School, second speaker, Bailey Langham. Third speaker from Colo High School, Isaak Salami. Fourth speaker, Noah Vanderburg. And their coach, Kristin Smith.
From Hornsby Girls High School, first speaker, Emma Hancock. Second speaker, Kaitlyn Kong. Third speaker, Karina Mathias. Fourth speaker, Charlotte Barry. And their coach, Emily Thomas. Could the winning team come to the front of the stage, where Angela [? sias ?] will present them with the Teasdale Trophy as state champions.
End of transcript