BarnCamp 2017 notes
Revision as of 21:17, 13 June 2017
(still) re-using computers in our communities
We discussed the social and technical challenges of computer re-use. It was noted that the environmental advantages of re-use (as opposed to recycling) of electronics are well known.
Perhaps more interesting, and offering more scope for variety, are the potential social/political benefits of re-use projects, including:
- opportunities for building community cohesion by getting (for example) people who can't afford new computers to cooperate with people who have tech jobs/knowledge
- providing a focus for commonality with established community groups, e.g. artistic, tech, green organisations
- merging 'after sales support' with basic skills education
- developing everyone's experience of localised mutual aid
- opportunities to have discussions with different people about consumerism and its social/environmental harms
There are many obstacles to these kinds of projects. Some of our experiences included:
- people wanting to get skills they can use "in the workplace", i.e. wanting to learn Microsoft Office so they can get a drudge office job, because for many of us the options for meeting our material needs still require this
- even public sector organisations can be phobic about FLOSS options that could extend the useful life of machines
- data destruction and other regulatory requirements (WEEE licenses for example)
- demand for standardisation, whereas our strengths might be more to do with personalisation (focussing on our individual strengths and needs, and the particular needs of our customers/neighbours/mates)
The struggle continues...
Markov Chains and Bots
A brief background on Markov chain processes and their applications was given, and an attempt was then made (mainly outside the talk) to build a text-generating bot.
- Some examples of their use were given, mainly with text generators:
https://filiph.github.io/markov/
http://rubberducky.org/cgi-bin/chomsky.pl
https://reddit.com/r/SubredditSimulator/
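The idea behind these text generators can be sketched in a few lines. This is a minimal order-1 Markov chain (each word's successor is chosen from the words that followed it in the training text); it illustrates the concept only and is not how the tools linked above are implemented.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Because successors are stored with repetition, more frequent continuations are proportionally more likely to be picked, which is exactly the transition-probability idea.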
- An attempt to reproduce a Trump bot was made, first by downloading his tweets (6 months' worth), using the method described here:
http://trumptwitterarchive.com/howto/all_tweets.html
{Note: when saving these in UTF-8 format, a lot of special characters caused problems, so some data cleaning is required}
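One crude way to do that cleaning is to drop everything outside ASCII (emoji, smart quotes, stray control bytes). This is a minimal sketch, not the method used on the day, and the sample string is just for illustration:

```python
def clean_text(text: str) -> str:
    """Strip characters outside ASCII (emoji, smart quotes, stray bytes)
    that can cause trouble when round-tripping tweet dumps through UTF-8."""
    return text.encode("ascii", errors="ignore").decode("ascii")

# Smart quotes and emoji disappear; plain text survives.
sample = "\u201cSAD!\u201d \U0001F1FA\U0001F1F8 Make America Great Again"
print(clean_text(sample))
```

To clean the whole dump, apply `clean_text` line by line while reading the CSV with `errors="replace"` so malformed bytes don't abort the run.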
- Some Python code on GitHub was found and cloned into a working directory (along with the tweet file):
https://github.com/codebox/markov-text
- The file was passed through the package to create a database of transition probabilities:
python markov.py parse trumpDB 2 trump_tweet.csv
{2 is the depth of the Markov chain, i.e. the preceding 2 words are taken into consideration when calculating the probabilities}
- Then, to generate the text, use:
python markov.py gen trumpDB 5
{5 is the number of sentences generated}
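To make the depth parameter concrete: at depth 2 the chain's state is the last two words, and the parse step counts which word follows each two-word state. A quick illustration of the idea (this mirrors the concept, not markov-text's internal database format):

```python
from collections import defaultdict

def transitions(text, depth=2):
    """Count, for each tuple of `depth` consecutive words, how often
    each possible next word follows it."""
    words = text.split()
    table = defaultdict(lambda: defaultdict(int))
    for i in range(len(words) - depth):
        state = tuple(words[i:i + depth])
        table[state][words[i + depth]] += 1
    return table

table = transitions("we will build a wall and we will win")
for state, followers in table.items():
    print(state, dict(followers))
```

Here the state `("we", "will")` can be followed by either "build" or "win", so generation branches at that point; a higher depth gives more coherent but less varied output.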
Next steps:
- Automate the process
- Edit code to respond to replies 'intelligently'
- Link into chat applications (via APIs)
- Watch the chaos ensue