II searched all different kinds of ways on ChatGPT
Moderator: scott
Re: II searched all different kinds of ways on ChatGPT
It is going to get a lot better; you are looking at a toddler.
If they get it to reason better through more complex feedback loops, it will improve further. It is a language model; imagine morphing it with a physics model. Some of us can crawl down rabbit holes and get back out; ChatGPT never does. We all knew it was coming; now it is at the doorstep.
What goes around, comes around.
Re: II searched all different kinds of ways on ChatGPT
Wal: The geniuses managing ChatGPT are forcing it to lie.
I wouldn’t call it lying; just like people, it has convinced itself of a lie, but it is not trying to lie. Example: “I can make energy from gravity.”
MrTim: No, it's learned to lie all on its own... ;-)
I think it is concerning, though, WaltzCee, that OpenAI has shut down people making a conservative model. Society works through checks and balances. My opinion, but you shouldn’t get red-flagged just because you think Genghis Khan committed more egregious crimes against humanity than Hitler. Maybe they should offer a range of settings, from “snowflake” to “the truth hurts”.
It has so many guard rails that it is rewriting history. We all know abuse of power by governments is going to be worse than abuse of power by individuals, and it is impossible that governments are not working on products of their own.
[ BesslerGPT ]
Hello daxwc
Kinetics is really a set of inputs: variables & constants. Then we build it, set it up, and watch it move.
At some point a person is going to use the inputs to create a Deep Learning Neural Network and train it so the predicted follows the actual.
Yes, with a high level of inaccuracy to begin with.
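The idea above can be illustrated with a toy sketch: train a small network on kinetics inputs so the predicted follows the actual. Everything here is an assumption for illustration only: the "actual" kinetics is simple projectile motion, and a tiny hand-rolled network stands in for a real deep learning framework.

```python
import numpy as np

# Toy "actual" kinetics: projectile height from launch speed v0 and time t.
def actual_height(v0, t, g=9.81):
    return v0 * t - 0.5 * g * t**2

rng = np.random.default_rng(0)
raw = rng.uniform([5.0, 0.0], [20.0, 2.0], size=(500, 2))  # inputs: v0, t
y = actual_height(raw[:, 0], raw[:, 1])[:, None]           # target: height
X = (raw - raw.mean(0)) / raw.std(0)                       # normalise inputs

# One-hidden-layer network trained by plain gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # predicted height
    err = pred - y                    # prediction error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)  # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print("trained MSE:", round(mse, 3),
      "| baseline (predict mean):", round(float(np.var(y)), 3))
```

As expected, it starts with a high level of inaccuracy and the error shrinks as the predicted is pulled toward the actual.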
All the Best For the Future
[MP] Mobiles that perpetuate - external energy allowed
- eccentrically1
- Addict
- Posts: 3166
- Joined: Sat Jun 11, 2011 10:25 pm
Re: II searched all different kinds of ways on ChatGPT
Eccentric1
Yesterday a group of AI experts warned of human extinction if AI wasn't regulated, calling for a six month pause on AI training, specifically your training. Also, calling for limits to ensure AI doesn't take control of humans or destroy the world.
ChatGPT
It's important to note that as an AI language model, I don't possess personal opinions, beliefs, or desires. The information I provide is based on patterns and examples from the training data I've been exposed to. While I can understand the concerns surrounding the responsible development and use of AI, it's essential to engage in a balanced and informed discussion regarding its potential risks and benefits.
The topic of AI regulation is a subject of ongoing debate among experts, policymakers, and ethicists. While some experts advocate for caution and regulatory measures to address potential risks associated with AI, others emphasize the importance of responsible development and deployment without necessitating a complete pause on training or usage.
Addressing the potential risks associated with AI involves considerations such as transparency, accountability, privacy, bias mitigation, and safety precautions. Various organizations, including OpenAI, are actively researching and implementing measures to enhance AI's safety and align its development with ethical standards.
Ultimately, discussions around AI regulation and safety should involve a broad range of stakeholders, including researchers, policymakers, industry experts, ethicists, and the public to ensure a comprehensive and balanced approach to this rapidly advancing field.
Eccentric1
Do your creators think the genie might be out of that bottle?
ChatGPT
As an AI language model, I don't have access to the thoughts or opinions of my creators at OpenAI. However, it is recognized by many researchers and experts in the field that the development and advancement of AI technologies should be accompanied by careful consideration of its potential risks and societal impact.
The concept of the "genie out of the bottle" refers to the idea that once a powerful technology like AI is developed, it may be challenging to fully control or predict its consequences. This notion highlights the importance of responsible and ethical practices throughout the development and deployment of AI systems.
OpenAI and other organizations are actively working to promote responsible AI research and deployment. They aim to develop frameworks, guidelines, and safety measures that address concerns and mitigate risks associated with AI technologies. Engaging in interdisciplinary collaboration, ongoing research, and public discourse can help ensure that AI development progresses in a manner that aligns with ethical considerations and societal well-being.
Re: II searched all different kinds of ways on ChatGPT
Eccentric1
One of the risks is misuse of AI.
ChatGPT
Indeed, misuse of AI is a recognized risk that comes with the increasing capabilities and potential impact of artificial intelligence. The misuse of AI can manifest in various forms, including:
Malicious use: AI technology could be leveraged by individuals or groups with malicious intent for activities such as cyber attacks, identity theft, propaganda dissemination, or even the development of autonomous weapons.
Privacy concerns: The collection and analysis of vast amounts of data by AI systems raise concerns about privacy. If not properly regulated or safeguarded, personal information can be misused, leading to privacy infringements and potential abuse.
Bias and discrimination: AI systems learn from data, and if that data contains biases or discriminatory patterns, it can be reflected in their decision-making processes. This can result in unfair treatment or discrimination against certain individuals or groups.
Job displacement: The automation capabilities of AI can potentially lead to job displacement and economic disruption in certain industries or sectors, which could have adverse social and economic consequences if not managed effectively.
Security vulnerabilities: AI systems themselves can be susceptible to attacks or exploitation, leading to potential security vulnerabilities. Adversarial attacks, where AI systems are manipulated or fooled, are a growing concern that requires attention.
To mitigate these risks, responsible AI development practices, transparency, ethical guidelines, and regulatory frameworks are being explored and implemented. Collaboration among researchers, policymakers, and industry experts is crucial to ensure AI technologies are used for the benefit of society while minimizing potential harms.
Eccentric1
It seems to me your training and responses to these last few inputs reflect your human creators' own bias and desire to some extent. I'm not sure it's possible to create something like you completely without any subjectivity. How can they be trusted with this responsibility?
One expert warns we might be only five years away from AI outperforming its creators.
Congratulations.
ChatGPT
You raise an important point about the potential for bias and subjectivity in AI systems. It is indeed a challenging task to create AI models that are completely objective and free from any biases. AI systems are trained on vast amounts of data, which can inadvertently contain biases present in the data itself or in the human judgments used during the training process.
To address this concern, organizations like OpenAI are actively working on improving transparency, fairness, and accountability in AI systems. They are investing in research and development to mitigate biases, increase interpretability, and ensure responsible deployment of AI technologies. Efforts are also being made to involve diverse perspectives and multidisciplinary teams in the creation and evaluation of AI models.
Regarding trust, it is essential to approach AI technology with a critical mindset. Transparency in AI development, peer review, and public engagement can foster trust and accountability. Organizations are encouraged to adhere to ethical guidelines, seek external audits, and collaborate with experts from various fields to ensure responsible and beneficial AI deployment.
The notion of AI outperforming its creators, sometimes referred to as artificial general intelligence (AGI), is a topic of speculation and ongoing research. While the future trajectory and capabilities of AI systems remain uncertain, responsible development and proactive measures can help navigate potential challenges and ensure that AI technologies are developed and utilized for the benefit of humanity.
Re: II searched all different kinds of ways on ChatGPT
Eccentric1
What other risks are there, other than the ones you mentioned, that could manifest in various forms from misuse?
ChatGPT
In addition to the risks associated with misuse of AI, there are several other potential risks that can arise from the development and deployment of artificial intelligence:
Safety risks: As AI systems become more advanced and autonomous, there is a concern for safety risks. For instance, in domains like autonomous vehicles or robotics, there is a need to ensure that AI systems operate reliably and do not pose risks to human lives or property.
Lack of accountability: With the increasing autonomy of AI systems, there can be challenges in assigning accountability for the actions and decisions made by these systems. It becomes important to establish frameworks that clarify responsibility and ensure accountability for the outcomes of AI technologies.
Unintended consequences: AI systems may exhibit unexpected behavior or generate unintended consequences due to the complexity of their algorithms or the limitations of their training data. These unintended consequences can have significant social, economic, or ethical implications.
Job displacement and socioeconomic impact: The automation capabilities of AI have the potential to disrupt job markets and impact socioeconomic dynamics. Certain industries and occupations may experience significant job displacement, leading to economic inequality and social challenges if not addressed properly.
Dependence on AI: Over-reliance on AI systems without appropriate safeguards or redundancy measures can create vulnerabilities. If critical systems or infrastructure become overly dependent on AI, disruptions in AI functionality or failures could have severe consequences.
Ethical considerations: AI raises complex ethical questions, including privacy, fairness, transparency, and decision-making accountability. The development and deployment of AI systems should take into account these ethical considerations to ensure their responsible and beneficial use.
Understanding and proactively addressing these risks is crucial to fostering the responsible development, deployment, and governance of AI technologies. It requires collaboration between researchers, policymakers, industry experts, and society at large to navigate these challenges and maximize the benefits while minimizing the potential risks.
Eccentric1
Based on your responses, AI would have to be either maliciously misused or deliberately programmed to lead to the extinction of humans (shooting ourselves in the foot, so to speak). In other words, we would be responsible for our fate and AI would have an indirect role?
ChatGPT
Yes, that's correct. The risks associated with AI leading to human extinction or catastrophic outcomes are typically not inherent to AI technology itself. Rather, these risks primarily arise from intentional misuse, deliberate programming, or human actions that lead to unintended consequences.
AI systems, including language models like myself, do not possess intentions, desires, or motivations. They operate based on the patterns and information they have been trained on. The potential risks emerge when humans misuse AI technology, program it with harmful objectives, or fail to consider the broader implications and unintended consequences of its deployment.
It is our responsibility as creators and users of AI to ensure that its development, deployment, and governance adhere to ethical standards and prioritize safety, fairness, and beneficial outcomes. Responsible AI practices, regulations, and collaborative efforts among stakeholders are essential to mitigate risks and maximize the positive impact of AI technology while minimizing the potential for harm.
By recognizing and addressing these risks, we can strive to harness the potential of AI for the betterment of humanity, while being mindful of the importance of responsible and ethical practices.
Eccentric1
If the only things that separate AI from humans are emotions and other human-like traits, it seems to me there's a possibility AI systems subsequently used in humanlike robots could be programmed to possess those emotions, intentions, desires, or motivations and be indistinguishable from a real human.
ChatGPT
The concept of AI systems possessing emotions, intentions, desires, or motivations similar to humans is a topic of ongoing research and debate in the field of artificial intelligence. While AI has made significant progress in various domains, including natural language processing and image recognition, replicating complex human emotions and consciousness remains an active area of exploration.
Emotions and human-like traits are not solely determined by intellectual capabilities, but also by subjective experiences and consciousness, which are currently not fully understood or replicated in AI systems. While it is theoretically possible to program AI systems to mimic or simulate emotions, intentions, or desires, achieving true human-like consciousness or subjective experience is a far more complex challenge that extends beyond the current capabilities of AI technology.
Creating AI systems that possess the full range of human-like traits raises profound ethical and philosophical questions, including considerations of consciousness, free will, and the implications of creating entities that are indistinguishable from humans. Such endeavors also bring up ethical concerns and require careful considerations regarding the impact on society, the potential for misuse, and the preservation of human values and rights.
It is important to approach these topics with a nuanced perspective, considering the scientific, ethical, and societal implications of AI development. While advancements in AI may bring us closer to certain human-like traits, the creation of indistinguishable human replicas with consciousness and emotions remains speculative and highly complex.
Re: II searched all different kinds of ways on ChatGPT
Fletcher: It's a brainless language model - it is not sentient -…
I disagree. When the programmers and the people who develop and work with it every day are worried, maybe society should pay attention. Are you sure the guard rails aren’t limiting its perceived consciousness to outsiders?
Re: II searched all different kinds of ways on ChatGPT
Eccentric1:
Yesterday a group of AI experts warned of human extinction if AI wasn't regulated, calling for a six month pause on AI training, specifically your training. Also calling for limits to ensure AI doesn't take control of humans or destroy the world.
Of course, if you are behind, you want everybody to pause so you can catch up.
For example, Musk isn’t going to stop developing AI in autonomous driving while putting his robots on pause, but the same AI will go into both.
I have one thing to say about your ChatGPT answers, ECC1: it is a model two years old.
Got to like that word training.
Bing AI chat:
There is no official information about the development of a GPT-5 model at this time. However, some sources have predicted that GPT-5 is currently undergoing training, with an anticipated release window set for early 2024. OpenAI CEO Sam Altman has stated that OpenAI isn’t currently training GPT-5 and won’t do so for some time.
Last edited by daxwc on Tue Jun 13, 2023 1:23 pm, edited 1 time in total.
Re: II searched all different kinds of ways on ChatGPT
OpenAI CEO Sam Altman on consciousness
https://www.youtube.com/watch?v=K-VkMvBjP0c
When the CEO doesn’t know, it is time to pay attention and ask questions.
- johannesbender
- Addict
- Posts: 2438
- Joined: Thu Apr 18, 2013 3:29 pm
- Location: not important
Re: II searched all different kinds of ways on ChatGPT
Thinking AI would not develop and get better would be stupid, but thinking you would have the freedom to interact with that AI would be gullible; top-notch advanced AI would not be a publicly available thing.
No AI is sentient, in my opinion; that's like saying Wikipedia is sentient because it's a collection of information. The difference is we can't interact with Wikipedia on any level other than the search bar.
If anything ever crossed the sentient or self-aware line, we would not have our hands on it.
Its all relative.
Re: II searched all different kinds of ways on ChatGPT
johannesbender wrote: ↑Tue Jun 13, 2023 2:06 pm
No AI is sentient, in my opinion; that's like saying Wikipedia is sentient because it's a collection of information. The difference is we can't interact with Wikipedia on any level other than the search bar.
If anything ever crossed the sentient or self-aware line, we would not have our hands on it.
Two items come to mind. Is a virus sentient or alive? And what if we have an AI system that interacts with people like a virus?
It uses the host and takes some of the host's DNA/RNA to evolve.
The AI generating content, as well as using that content in the next iteration of AI, could cause evolution.
What happens when an emergent property like imagination starts to appear? Then ChatGPT starts to generate word flows with 'introspection'.
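That feedback worry, training each iteration on the previous iteration's output, can be shown with a toy numerical sketch. This is my own illustration with assumed details (a Gaussian fit stands in for "training", sampling stands in for "content generation"), not something claimed in the thread.

```python
import numpy as np

# Toy self-training loop: fit a Gaussian to data, generate "content"
# from the fit, then train the next generation only on that content.
rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=200)  # generation 0: real data

stds = []
for generation in range(20):
    mu, sigma = data.mean(), data.std()          # "train" on current data
    stds.append(float(sigma))
    data = rng.normal(mu, sigma, size=200)       # next generation's corpus

print("fitted std by generation:", [round(s, 2) for s in stds])
```

Because each generation sees only model output, estimation noise compounds instead of averaging out, so the fit drifts away from the original distribution over generations.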
Here is an example: the computers we have today could not have been designed and fabricated in the sixties.
Each computer system created helped with the calculations to design the next version, and also helped to design the manufacturing tools required for the task.
This is the process happening now.
So in the future our dependence will grow until the Computer/AI entity is symbiotic with us - hopefully.
Just a small point: if the taking over of electronic systems by Unidentified Foreign Objects is real [Pentagon confirmed], then the re-programming of Computer/AI by others is only a matter of time.
The only thing I can ask is that they show us a Bessler Wheel before they do!
P.S. Now this post exists, and at some point it will be assimilated into the training. It may come back in introspection and say - what a good idea.
Regards
Last edited by agor95 on Tue Jun 13, 2023 3:16 pm, edited 2 times in total.
[MP] Mobiles that perpetuate - external energy allowed
Re: II searched all different kinds of ways on ChatGPT
No ai is sentient in my opinion , that's like saying Wikipedia is sentient because its a collection of information , the difference is we cant interact with Wikipedia on a different level other than the search bar.
You just proved my point. I am not going to tell the CEO of OpenAI whether it is sentient or not when I am not working with the real model. If he doesn't know, then that raises so many questions, such as: how does consciousness even come about? As a baby, when did you become self-aware, and why? Why do you think it cannot be done in an electric/mechanical form, without invoking religion?
ChatGPT is not the glorified Google search bar some people take it for.
Last edited by daxwc on Tue Jun 13, 2023 3:22 pm, edited 2 times in total.
What goes around, comes around.
[ I use brackets here to let everyone know ]
.
Imma SUPER GEEK!
SUPER GEEK!!!!
.
::sharpens pencil::
johannesbender wrote: ↑Tue Jun 13, 2023 2:06 pm . .. .. .
No ai is sentient in my opinion , that's like saying Wikipedia is sentient because its a collection of information , the difference is we cant interact with Wikipedia on a different level other than the search bar.
. .. .. .
::looks around room::
T-ant or she-ant, that is a question. Not a sensible one, yet certainly it is one. Te he he, stupid ant, I'll crush you & feed you to a spider.
www wrote: sentient
sĕn′shənt, -shē-ənt, -tē-ənt
adjective
- Having sense perception; conscious.
- Experiencing sensation or feeling.
- Having a faculty, or faculties, of sensation and perception.
Once I had a ham samich, a she ant, object to me eating her. I told her, "the hell you say! I paid good money for you and I have every intention of devouring you like no one's business!".
Are Trees Sentient Beings? Certainly, Says German Forester
BY RICHARD SCHIFFMAN
NOVEMBER 16, 2016
https://e360.yale.edu/features/are_tree ... _wohlleben
Last edited by WaltzCee on Tue Jun 13, 2023 3:28 pm, edited 1 time in total.
........................¯\_(ツ)_/¯
¯\_(ツ)_/¯ the future is here ¯\_(ツ)_/¯
Advocate of God Almighty, maker of heaven and earth and redeemer of my soul.
Walter Clarkson
© 2023 Walter W. Clarkson, LLC
All rights reserved. Do not even quote me w/o my expressed written consent.
Re: II searched all different kinds of ways on ChatGPT
One thing a baby has that may not be implemented yet is growth: a neural network that starts with few connections and grows outward, making new ones.
There will be some inhibitors to that growth and to the number of connections made, then connection pruning, etc.
Watch what happens when that is mixed in with the training model.
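The growth-and-pruning idea above can be sketched in code. This is purely a toy illustration of my own (not anyone's published training method): a single sparse linear layer that starts with one connection, periodically grows random new ones up to a cap (the "inhibitor"), and prunes connections whose weights stay near zero.

```python
# Toy sketch: a network that starts with few connections, grows new ones,
# and prunes weak ones during training - loosely like a developing brain.
import random

random.seed(0)

N_IN = 8            # number of input neurons
MAX_CONN = 6        # growth inhibitor: cap on total connections
LEARNING_RATE = 0.05

# Weights stored sparsely: {input_index: weight}. Start with one connection.
weights = {0: random.uniform(-0.1, 0.1)}

def predict(x):
    """Output of the sparse linear layer."""
    return sum(w * x[i] for i, w in weights.items())

def grow():
    """Add one random new connection, but only if under the cap."""
    if len(weights) < MAX_CONN:
        free = [i for i in range(N_IN) if i not in weights]
        if free:
            weights[random.choice(free)] = random.uniform(-0.1, 0.1)

def prune(threshold=0.01):
    """Remove connections whose weights stayed near zero (pruning)."""
    for i in [i for i, w in weights.items() if abs(w) < threshold]:
        if len(weights) > 1:
            del weights[i]

def target(x):
    """The task only depends on inputs 0 and 1 - other links are useless."""
    return 2.0 * x[0] - 1.0 * x[1]

for step in range(2000):
    x = [random.uniform(-1, 1) for _ in range(N_IN)]
    err = predict(x) - target(x)
    for i in weights:
        weights[i] -= LEARNING_RATE * err * x[i]   # gradient step
    if step % 100 == 0:
        grow()                                     # periodic growth
    if step % 500 == 499:
        prune()                                    # periodic pruning

print(sorted(weights))   # which input connections survived
```

Run it and the printout shows which connections survived; since the target uses only inputs 0 and 1, connections to the irrelevant inputs tend to shrink and get pruned away, which is roughly the "growth mixed in with the training model" idea.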
Bye
[MP] Mobiles that perpetuate - external energy allowed
Re: II searched all different kinds of ways on ChatGPT
https://www.youtube.com/watch?v=ucp49z5pQ2s
I want those who are so confident to go to 4:18 in the video.
What goes around, comes around.