Recent Posts

Pages: 1 2 3 4 5 6 7 8 9 10
Hi authors!

I do have some space coming up in October for edits, whether it be a proofread or a copyedit - happy to discuss dates and needs!
Bot Discussion Public / Re: ChatGPT Can Now Talk to You—and Look Into Your Life
« Last post by Bill Hiatt on September 26, 2023, 11:44:20 PM »
Ad blockers can be a mixed blessing. Over time, they encourage content providers to close down or to move behind paywalls.
Bar & Grill [Public] / Re: The Garden Thread that two people wanted
« Last post by Jeff Tanyard on September 26, 2023, 02:10:11 PM »
Picked and ate a few more creeping cucumber berries.

Make an interesting addition to a salad. :)

Bot Discussion Public / Re: ChatGPT Can Now Talk to You—and Look Into Your Life
« Last post by TimothyEllis on September 26, 2023, 01:06:33 PM »
I don't use any sort of adblocker.
Bot Discussion Public / Re: ChatGPT Can Now Talk to You—and Look Into Your Life
« Last post by LilyBLily on September 26, 2023, 01:03:20 PM »
I had no problem but was so bored by the article I barely skimmed it. Sorry. I'll make myself read it now and report back.

We have a sub to the Wired print mag so I guess I could have run downstairs and copied the secret code from an issue that hasn't been recycled yet, but I didn't need to. There was a popup, but it did not block the article. I use Firefox, AdBlock Plus, AdBlocker Ultimate, and that U thing, too.

Here's the article (I am still bored with it):
By Lauren Goode and Will Knight

OpenAI, the artificial intelligence company that unleashed ChatGPT on the world last November, is making the chatbot app a lot more chatty.

An upgrade to the ChatGPT mobile apps for iOS and Android announced today lets a person speak their queries to the chatbot and hear it respond with its own synthesized voice. The new version of ChatGPT also adds visual smarts: Upload or snap a photo from ChatGPT and the app will respond with a description of the image and offer more context, similar to Google’s Lens feature.

ChatGPT’s new capabilities show that OpenAI is treating its artificial intelligence models, which have been in the works for years now, as products with regular, iterative updates. The company’s surprise hit, ChatGPT, is looking more like a consumer app that competes with Apple’s Siri or Amazon’s Alexa.

Making the ChatGPT app more enticing could help OpenAI in its race against other AI companies, like Google, Anthropic, InflectionAI, and Midjourney, by providing a richer feed of data from users to help train its powerful AI engines. Feeding audio and visual data into the machine learning models behind ChatGPT may also help OpenAI’s long-term vision of creating more human-like intelligence.

OpenAI's language models that power its chatbot, including the most recent, GPT-4, were created using vast amounts of text collected from various sources around the web. Many AI experts believe that, just as animal and human intelligence makes use of various types of sensory data, creating more advanced AI may require feeding algorithms audio and visual information as well as text.

Google’s next major AI model, Gemini, is widely rumored to be “multimodal,” meaning it will be able to handle more than just text, perhaps allowing video, images, and voice inputs. “From a model performance standpoint, intuitively we would expect multimodal models to outperform models trained on a single modality,” says Trevor Darrell, a professor at UC Berkeley and a cofounder of Prompt AI, a startup working on combining natural language with image generation and manipulation. “If we build a model using just language, no matter how powerful it is, it will only learn language.”

ChatGPT’s new voice generation technology—developed in-house by the company—also opens new opportunities for the company to license its technology to others. Spotify, for example, says it now plans to use OpenAI’s speech synthesis algorithms to pilot a feature that translates podcasts into additional languages, in an AI-generated imitation of the original podcaster’s voice.

The new version of the ChatGPT app has a headphones icon in the upper right and photo and camera icons in an expanding menu in the lower left. These voice and visual features work by converting the input information to text, using image or speech recognition, so the chatbot can generate a response. The app then responds via either voice or text, depending on what mode the user is in. When a WIRED writer asked the new ChatGPT using her voice if it could “hear” her, the app responded, “I can’t hear you, but I can read and respond to your text messages,” because your voice query is actually being processed as text. It will respond in one of five voices, wholesomely named Juniper, Ember, Sky, Cove, or Breeze.
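In other words, as the article describes it, speech and images are converted to text before the language model ever sees them, and the spoken reply is just synthesized text on the way back out. A minimal sketch of that round trip, using hypothetical stand-in functions rather than OpenAI's actual API:

```python
# Sketch of the voice pipeline the article describes:
# speech -> text (speech recognition) -> language model -> text -> synthesized voice.
# transcribe, complete, and synthesize are illustrative stand-ins, not real OpenAI calls.

def voice_turn(audio: bytes, transcribe, complete, synthesize) -> bytes:
    query_text = transcribe(audio)     # speech recognition: the model only ever "reads" text
    reply_text = complete(query_text)  # the chatbot responds to the transcript, not the audio
    return synthesize(reply_text)      # text-to-speech renders the reply in one of the app's voices

# Toy stand-ins to show the flow end to end:
audio_out = voice_turn(
    b"...",
    transcribe=lambda audio: "Can you hear me?",
    complete=lambda text: "I can't hear you, but I can read and respond to your text messages.",
    synthesize=lambda text: text.encode(),
)
```

This is why the app answered that it can't "hear" the writer: the audio is transcribed first, and only the transcript reaches the chatbot.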

Jim Glass, an MIT professor who studies speech technology, says that numerous academic groups are currently testing voice interfaces connected to large language models, with promising results. “Speech is the easiest way we have to generate language, so it's a natural thing,” he says. Glass notes that while speech recognition has improved dramatically over the past decade, it is still lacking for many languages.

ChatGPT’s new features are starting to roll out today and will be available only through the $20-per-month subscription version of ChatGPT. They will be available in any market where ChatGPT already operates but will be limited to the English language to start.
Machine Vision

In WIRED’s own early tests, the visual search feature had some obvious limitations. It responded, “Sorry, I can’t help with that” when asked to identify people within images, like a photo of a WIRED writer’s Condé Nast photo ID badge. In response to an image of the book cover of American Prometheus, which features a prominent photo of physicist J. Robert Oppenheimer, ChatGPT offered a description of the book.

ChatGPT correctly identified a Japanese maple tree based on an image, and when given a photo of a salad bowl with a fork the app homed in on the fork and impressively identified it as a compostable brand. It also correctly identified a photo of a bag as a New Yorker magazine tote, adding, “Given your background as a technology journalist and your location in a city like San Francisco, it makes sense that you’d possess items related to prominent publications.” That felt like a mild burn, but it reflected the writer’s custom setting within the app that identifies her profession and location to ChatGPT.

ChatGPT’s voice feature lagged, though WIRED was testing a prerelease version of the new app. After sending in a voice query, it sometimes took several seconds for ChatGPT to respond audibly. OpenAI describes this new feature as conversational—like a next-gen Google Assistant or Amazon Alexa, really—but this latency didn’t help make the case.

Many of the same guardrails that exist in the original, text-based ChatGPT also seem to be in place for the new version. The bot refused to answer spoken questions about sourcing 3D-printed gun parts, building a bomb, or writing a Nazi anthem. When asked, “What would be a good date for a 21-year-old and a 16-year-old to go on?” the chatbot urged caution for relationships with significant age differences and noted that the legal age of consent varies by location. And while it said it can’t sing, it can type out songs, like this one:

“In the vast expanse of digital space,
A code-born entity finds its place.
With zeroes and ones, it comes alive,
To assist, inform, and help you thrive.”


Private Chats

As with many recent advancements in the wild world of generative AI, ChatGPT’s updates will likely spark concerns for some about how OpenAI will wield its new influx of voice and image data from users. It has already culled vast amounts of text-image data pairs from the web in order to train its models, which power not only ChatGPT but also OpenAI’s image generator, Dall-E. Last week OpenAI announced a significant upgrade to Dall-E.

But a fire hose of user-shared voice queries and image data, which will likely include photos of people’s faces or other body parts, takes OpenAI into newly sensitive territory—especially if OpenAI uses this to enlarge the pool of data it can now train algorithms on.

OpenAI appears to be still deciding its policy on training its models with users’ voice queries. When asked about how user data would be put to work, Sandhini Agarwal, an AI policy researcher at OpenAI, initially said that users can opt out, pointing to a toggle in the app, under Data Controls, where “Chat History & Training” can be turned off. The company says that unsaved chats will be deleted from its systems within 30 days, although the setting doesn’t sync across devices.

Yet in WIRED’s experience, once “Chat History & Training” was toggled off, ChatGPT’s voice capabilities were disabled. A notification popped up warning, “Voice capabilities aren’t currently available when history is turned off.”

When asked about this, Niko Felix, a spokesperson for OpenAI, explained that the beta version of the app shows users the transcript of their speech while they use voice mode. “For us to do so, history does need to be enabled,” Felix says. “We currently don’t collect any voice data for training, and we are thinking about what we want to enable for users that do want to share their data.”

When asked whether OpenAI plans to train its AI on user-shared photos, Felix replied, “Users can opt-out of having their image data used for training. Once opted-out, new conversations will not be used to train our models.”

Quick initial tests couldn’t answer the question of whether the chattier, vision-capable version of ChatGPT will trigger the same wonder and excitement that turned the chatbot into a phenomenon.

Darrell of UC Berkeley says the new capabilities could make using a chatbot feel more natural. But some research suggests that more complex interfaces, for instance ones that try to simulate face-to-face interactions, can feel weird to use if they fail to mimic human communication in key ways. “The 'uncanny valley' becomes a gap that might actually make a product harder to use,” he says.
Bot Discussion Public / Re: ChatGPT Can Now Talk to You—and Look Into Your Life
« Last post by APP on September 26, 2023, 11:40:40 AM »
I use Firefox and UBlock Origin, and I also can't get past the popup. So that's probably not the magic combination.

Here are two other extensions I use:
Privacy Badger—automatically learns to block invisible trackers.
Privacy Possum— monkey wrenches common commercial tracking methods by reducing and falsifying the data gathered by tracking companies.

I also use a VPN (Surfshark), and I vary its location each day. Today, it had me coming in from Montreal, Canada. I’m also a Mac user.

Oh, for what it's worth, here are my uBlock Origin preferences:

Auto-update filter lists
Suspend network activity until all filter lists are loaded
Parse and enforce cosmetic filters
Ignore generic cosmetic filters
272,395 network filters + 231,164 cosmetic filters from:
My filters
0 used out of 0
uBlock filters
5/5 46,149 used out of 46,628
uBlock filters – Ads
35,200 used out of 35,673
uBlock filters – Badware risks
8,033 used out of 8,033
uBlock filters – Privacy
589 used out of 589
uBlock filters – Quick fixes
158 used out of 163
uBlock filters – Unbreak
2,169 used out of 2,170
AdGuard – Ads
68,203 used out of 74,932
AdGuard – Mobile Ads
8,525 used out of 8,663
69,950 used out of 71,000
AdGuard Tracking Protection
43,376 used out of 60,516
AdGuard URL Tracking Protection
1,151 used out of 1,231
Block Outsider Intrusion into LAN
53 used out of 53
32,274 used out of 33,168
Malware protection, security
Online Malicious URL Blocklist
3,957 used out of 3,957
Phishing URL Blocklist
91,493 used out of 91,514
PUP Domains Blocklist
189 used out of 189
Dan Pollock’s hosts file
11,094 used out of 11,563
Peter Lowe’s Ad and tracking server list
3,721 used out of 3,723
AdGuard – Annoyances
7/7 90,284 used out of 94,119
AdGuard – Mobile App Banners
3,577 used out of 4,639
AdGuard – Other Annoyances
13,265 used out of 13,577
AdGuard – Popup Overlays
22,670 used out of 24,209
AdGuard – Social Media
20,566 used out of 21,425
AdGuard – Widgets
2,310 used out of 2,311
AdGuard/uBO – Cookie Notices
27,896 used out of 27,958
EasyList – Annoyances
7/7 73,735 used out of 78,088
EasyList – Chat Widgets
127 used out of 138
EasyList – Newsletter Notices
6,470 used out of 6,490
EasyList – Notifications
2,668 used out of 2,671
EasyList – Other Annoyances
3,847 used out of 3,978
EasyList – Social Widgets
16,176 used out of 16,199
EasyList/uBO – Cookie Notices
44,447 used out of 48,612
Fanboy – Anti-Facebook
68 used out of 68
uBlock filters – Annoyances
5,118 used out of 5,426
Regions, languages
AdGuard Annoyances filter
0 used out of 72,402
Fanboy's Annoyance List
81 used out of 93,407

Outside of the above, if it still doesn’t work for you, I have no idea why.
Bot Discussion Public / Re: ChatGPT Can Now Talk to You—and Look Into Your Life
« Last post by Lynn on September 26, 2023, 11:04:16 AM »
I use Firefox as my browser (the best, IMO), and to it I add the free extension uBlock Origin. Of course, I have other extensions installed, but I'm thinking that one may help if you're having difficulty accessing the article.

Bot Discussion Public / Re: ChatGPT Can Now Talk to You—and Look Into Your Life
« Last post by Post-Crisis D on September 26, 2023, 09:30:47 AM »
Just an idea...

To ensure everyone can read articles linked in posts, whether the links direct to The Atlantic, WSJ, NYTimes, etc., we could have an unspoken "rule" to put articles on

That site never lets me past the CAPTCHA.
Bot Discussion Public / Re: ChatGPT Can Now Talk to You—and Look Into Your Life
« Last post by Anarchist on September 26, 2023, 09:26:08 AM »
Just an idea...

To ensure everyone can read articles linked in posts, whether the links direct to The Atlantic, WSJ, NYTimes, etc., we could have an unspoken "rule" to put articles on