2016's Issues

Entry: Towers in the Land of Smiling Chimeras

I'm new to this, but excited to try it out!

My idea for now is to do a progressively lipogrammatic novel a la Ella Minnow Pea: so the letters that are "allowed" to be used decrease as the novel goes on. Currently I have a basic outline of how to do this that may change, but here's the general idea:

  • generate a 50K text (using Markov chains at least to begin with), divided into chapters (each chapter will lose an additional letter eventually, but ignore that constraint in this step)
  • go through each chapter to find words that use the forbidden letters. In order:
  1. replace some fraction (yet to be determined) of banned words with synonyms (probably with WordNet), hoping that this leads to increasing vocabulary contrivedness as characters/the narrator try to get around the letter-based constraints.
  2. replace the rest of banned words (either ignored in the first step or things like prepositions, etc that would be hard to find synonyms for) with letters that look/sound similar to banned ones - e.g. Ella Minnow Pea uses "et" instead of "ed" for the past tense - "walket" instead of "walked". This part will probably be the hardest, because I'd like to play with sounds in some places but English vowels don't like to cooperate with that.
  3. black out/redact any words that remain - for which there weren't synonyms or there weren't good letter replacements - ideally the blacked-out portion of the text will increase as the novel goes on.
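Step 1 might look something like this sketch, where a tiny hardcoded synonym table stands in for a WordNet lookup (the words and banned-letter sets below are invented for illustration):

```python
# Sketch of the synonym-replacement pass: swap words containing banned
# letters for synonyms that avoid them. SYNONYMS is a toy stand-in for
# a real WordNet query.
SYNONYMS = {
    "walked": ["strolled", "ambled", "paced"],
    "dark": ["dim", "gloomy", "murky"],
}

def allowed(word, banned):
    return not any(ch in banned for ch in word.lower())

def replace_banned(word, banned):
    """Return the word, a synonym free of banned letters, or None."""
    if allowed(word, banned):
        return word
    for candidate in SYNONYMS.get(word.lower(), []):
        if allowed(candidate, banned):
            return candidate
    return None  # hand off to letter substitution (step 2) or redaction (step 3)
```

Words that come back as `None` fall through to the letter-replacement and redaction passes.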

I imagine I'll have to toy with the ratios and letter replacements above to get something that's a balance of readable and going with the concept. I'm also not totally sure yet what type of texts I want to use to train my Markov model on - I think it'd be fun to do political/spy novels or something like that to play with the redacted elements, but I don't know if I'd be able to find enough text to get a good model.

Possible future improvements, once I get a basic model working:

  • Switch from Markov chains to some kind of neural net for generation of the initial text - I'm not sure about how these work but I'm interested in them, so this might be a good opportunity to try
  • try to improve the quality of the generated text - differentiating beginning from end, keeping consistent characters, things like that
  • look into "invented synonyms" - EMP does stuff like "aeiouy" for "vowel" and "grapheme" for "letter" that WordNet might not have, but that add something to the text.
  • playing around with the presentation/design, maybe adding poem-based epigrams to chapter headings

Entry: ???

Waffling between Gutenberg analysis -> generator, or just plain random generation working towards a novel

A dictionary of an imaginary rhyming slang

Inspired by #41, a dictionary of an imaginary rhyming slang that behaves a bit like Cockney rhyming slang. The plan is to take the Unix dictionary file, find all entries with multi-word rhymes in CMU's rhyming dictionary, add some entries that have gone through multiple iterations of this transformation (as an etymology), and sort the results by the rhyme rather than the original word, producing results in a dictionary-style format.
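A crude offline sketch of the pipeline, where matching trailing letters stands in for a real phoneme lookup in the CMU rhyming dictionary, and the word and phrase lists are placeholders:

```python
# Pair each headword with a multi-word phrase that (crudely) rhymes
# with it, then sort the entries by the rhyming phrase rather than the
# original word, as in a dictionary of the slang itself.
WORDS = ["stone", "plate", "tree"]
PHRASES = ["dog and bone", "dinner date", "cup of tea"]

def crude_rhyme(a, b):
    # Stand-in for a CMU-dictionary phoneme comparison.
    return a[-3:] == b[-3:]

def build_slang(words, phrases):
    entries = []
    for w in words:
        for p in phrases:
            if crude_rhyme(w, p.split()[-1]):
                entries.append((p, w))  # (rhyming slang, original word)
    return sorted(entries)  # dictionary order by the slang phrase
```

A real version would swap `crude_rhyme` for pronunciation-based matching so that "tree" and "cup of tea" pair up.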

I'm not sure if this will produce enough entries to actually hit 50k words, but it's worth a try.

Have you ever heard the story about the Princess who made friends with a real, live dishwasher?

In the mornings, my 4-year-old daughter has been sing-telling me song-stories on the way to pre-school.

They all follow essentially the same formula:

Did you ever hear the story of the Princess who made friends with a real live [inanimate object, such as a window, tree branch, arm, steering wheel, or dishwasher]?

[Daddy says "no, I have not."]

Well, I'll tell it to you:

Once upon a time [she sings, kinda, all of this] there was a Princess who made friends with a real live dishwasher. Their names were Kate and Holly. The girl Princess's name was Kate and the dishwasher princess's name was Holly. [By this point, the singing has pretty much disappeared.]

Do you want to play a game Holly?

Oh yes.

Let's play hide-and-seek! You hide, and I'll count to 10 and find you!

Okay!

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ready or not here I come. Oh, I found you!

Yes, you did! Now it's my turn to count to ten.

Okay!

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ready or not here I come. Where can you be? There you are!

And they were bestest friends for EVER!

The games vary slightly, but this gives you an idea.

It ain't gonna be great, and it's gonna be very templatey, but it is templatey in her head. I think it might be fun to play with some of those kids-grappling-with-narrative-and-communication patterns and see what that gives me. Attempts to count, for example - this morning she said "Do you know how to count to 100? 1, 2, 3, 4, 5, 6, 7..." she got up to 25 and then said "One hundred!" Other days she's struggled to count to 20, skipping some numbers and doubling others out of sequence.

Or not. I'm curious what my daughter will think of the end-product.

Entry: REDACTED

I've never participated before and I'm pretty new to generating text, but I have two ideas for an entry:

  1. A spy novel composed mostly of after-action reports. Mostly tools, targets, and goals, with code names for the important parties.

Finished!

It's far from perfect, but I've got 50 000 words of top secret memos.

PDF here.

Source here.

I had to cheat with a find-and-replace to make the PDF behave, because LaTeX is awful.

Entry: Pirate Novel

As usual, I'll probably try a bunch of experiments at the start of the month to see what sticks, but my initial plan is to work on a scene expansion system to write coherent interactions between characters, with the scene parameters dictated by a plot generator. (Either the tree-expansion one I came up with last year, or a new one.)

Other promising avenues include word2vec, neural networks, and wave function collapse, which has had some interesting experiments with text already by @mewo2.

Entry: stealing Lovecraft's formula

H. P. Lovecraft has more variance than people give him credit for, but a lot of his stories are really structured the same way: a scientifically-minded young man from Massachusetts with an Anglo-Saxon name studies something, discovers something unexpected about either his own family history or the history of the Earth, suffers a nervous breakdown during which he stops trusting his own senses, and kills himself, usually told as a set of letters or diary entries. Since other aspects of Lovecraft are easy to imitate poorly (using the words "eldritch" and "rugose" will do), it seems like it should be fairly straightforward to produce stories that are poor imitations of Lovecraft using templating. I'll start off with something similar to last year's "PKD pitches" entry, but I hope to turn this into something more closely resembling actual stories.
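As a toy illustration of the templating idea (the slot fillers are invented for illustration, and this is not GGC's grammar format):

```python
# Fill the formula's slots from small pools of hand-written options.
import random

TEMPLATE = ("I, {name} of {town}, set out to study {subject}. "
            "What I uncovered about {discovery} was {adjective} beyond "
            "all reckoning, and I can no longer trust my own senses.")

SLOTS = {
    "name": ["Jervas Whitmore", "Edward Pickman"],
    "town": ["Arkham", "Innsmouth"],
    "subject": ["my family's genealogy", "certain pre-human ruins"],
    "discovery": ["my own ancestry", "the true history of the Earth"],
    "adjective": ["eldritch", "rugose"],
}

def generate(rng=random):
    return TEMPLATE.format(**{k: rng.choice(v) for k, v in SLOTS.items()})
```

Stringing many such templates together as diary entries would give the epistolary frame the formula calls for.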

I expect to use GGC for this. I may extend what GGC is capable of doing, perhaps by adding the ability to import arbitrary python code.

I will be putting this project in http://github.com/enkiv2/misc/nanogenmo-2016; currently I'm gathering some corpora there for that & other NaNoGenMo projects.

A Children's Book

My idea, vaguely, is to make a children's book with pictures, something with a consistent structure and rhyming scheme but random/nonsensical content. Maybe I'll use some image generation / recognition to create appropriate illustrations.

AI AI

I have an idea to feed 5000 frames of the movie AI to an image captioning neural net and see what comes out. I think 5000 should give at least 50000 words. I may put in some randomish paragraph breaks and simulate chapters somehow. This might take longer than a month to run though.

Entry: Argotify, french slangifier

My last year's NaNoGenMo entry was about generating a novel in the form of a theater piece out of the French 4chan-like forums of jeuxvideo.com. The aim was to recontextualize the worst of the French internet's dialogues into a highly classical form (the result is here).

Now this year, I plan to do kind of the opposite: create a program that takes any given text and transforms it into French slang. I think it would be fun to read Descartes' Discourse on the Method in French slang, for example. One of the inspirations for this idea is the French-famous Les boloss des Belles Lettres (it would translate into English as The Belles-Lettres' Losers), who write summaries of literature classics in French slang.

There are many different kinds of slang in French; I will try to implement several of them as best I can. Here's a list of French slang features I would like to implement:

  • Verlan (≈ Pig Latin in English?): cutting the word in two between two syllables and reversing the parts. Example: Ripou is the verlan of Pourri
  • Wesh: adding slang words or exclamations like "wesh", "gros", "sisi", "tkt", "tmtc", that are just used as punctuation ("Wesh" is the equivalent of "Yo" in English)
  • SMS writing: simplifying the writing of words using shortcuts like in English for example when you write sk8 instead of skate
  • Poseyyy: emphasizing the endings of words that finish with the sound "é" by adding "eyyyy". Sometimes also duplicating the first syllables in the word, for example here: Poposeyyy (a slang made famous by a strange rapper(?) called Swagg man)
  • Other: multiplying punctuation marks (typical internet comment writing style), etc.?
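A rough sketch of the Verlan rule, where cutting after the first vowel group is a crude placeholder for proper French syllabification (the kind metriques.js could provide), and where real Verlan also respells its results:

```python
# Split a word at a (crudely guessed) syllable boundary and swap the
# halves: pourri -> [pou|rri] -> rripou -> ripou.
import re

def verlan(word):
    # Cut after the leading consonants plus first vowel group.
    m = re.match(r"([^aeiouy]*[aeiouy]+)(.+)", word)
    if not m:
        return word
    first, rest = m.groups()
    # Collapse a doubled consonant left at the seam (rri -> ri).
    rest = re.sub(r"^(.)\1", r"\1", rest)
    return rest + first
```

This reproduces the example above (pourri -> ripou) and, say, metro -> trome, but irregular forms like femme -> meuf would need their own table.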

Since I aim to do many different slang styles and features, I was thinking of also implementing a range input for every feature, so that users can choose its degree in the text transformation.

The program could be on a webpage using a textarea field for text input, but I would prefer it as a browser extension, allowing it to be run on any webpage visited using Chrome or Firefox. It will be coded in JavaScript.

I already have some resources for working on French syllables in a small JS package I created called metriques.js. I know how to create a Chrome extension but not Firefox so some advice about this would be helpful. Also, the goal of the extension being to alter the text content of a webpage, finding a code that already implements this kind of idea would be a good inspiration.

I would also love feedback on the idea, or any information that would help me with the project, even if it's not in French!

Entry: Source Code

A novel where the source code is the sole corpus for the novel. The source of this entry is: "What if I just wrote a 50k-word program that printed itself out?"

The Voynich II

I've been meaning to work with Artificial Neural Networks for a little while.
I will attempt a thing. Stay tuned.

Entry: Simm's Fairy Tales

Thinking about generating a collection of short fairy tales, as they have similar plot structure (which means: more focus on readability and uniqueness). Not sure if I'll have enough time in November to finish this.

Entry: Excellent Fancy

Let me see. (takes the skull) Alas, poor Yorick! I knew him, Horatio, a fellow of infinite jest, of most excellent fancy. He hath borne me on his back a thousand times, and now, how abhorred in my imagination it is!

I'll be using the David Foster Wallace novel Infinite Jest as a training corpus for word2vec. Using this model and word embeddings, I'll rewrite Hamlet as a novel titled Excellent Fancy. Excellent Fancy will somehow develop the concept of 'ghost' in various ways and forms.

Entry: Gamebook of Starship Tropes

I don’t have as much time as I’d hoped this year, so instead of attempting something ambitious and algorithmic (really excited to see what others come up with here), I want to do something more mechanical, particularly as I have put a lot of time into crafting a really useful API/library for template grammars over the past few months. This approach will be the fastest and most intuitive way for me to actually finish something.

My current plan is to make a sequel to my 2015 entry using stock scifi and spaceship tropes with more sophisticated world gen than last time (maybe with the capability to visit different planets, star systems, etc).

Also hoping to make more extensive notes and document as I go, this time around.

Almost-but-not-quite

My main project will be to complete an npm module for getting texts that are almost-but-not-quite the same as the source text.

The idea is roughly the same as @dariusk's Harpooners and Sailors (here (source) and here (output+notes)) from last year - but wrapped up into a nice reusable package.

I think I would like to use such a module for other projects, so this is a good time to git-r-done.

Plus, I've been holding off the implementation of it until November, anyway.

Entry: Save The Cat

This is going to be a cheating pseudo-entry since I worked on it months ago, but, I do plan to work on it over the course of November since it might produce interesting results.

There's a popular manual for screenwriters called Save The Cat. (I think I mentioned this in resources last year, or possibly the previous.) It describes a fairly specific format for producing screenplays with a conventional 3-act structure and conventional set-ups and pay-offs, which is interesting for aspiring screenwriters since it gives plenty of structure (such that while a screenplay written by its rules may be mediocre and uninteresting, such a screenplay will never reach the heights of comical incompetence common in even low-budget professional films from 30 years ago). It's interesting from a generative fiction perspective because it promises a conventionally structured screenplay with primary and secondary character arcs out of a collection of 3-page chunks.

My implementation is here: https://github.com/enkiv2/savethecat

At the time of this writing, I have generation for settings & characters (based on the archetypes/ section of Corpora), a mechanism for producing dialog from templates (filling in scene introductions, character names, etc.), and an arrangement of these scenes based on the beat sheet (in other words, for each scene, we have the emotional change for the primary character, the conflicting characters or forces in the scene, the type of conflict as archetype, and some logic for ensuring that for "special" spots on the grid we have one of the appropriate conflict types).

What I don't have is: full 3-page templates, a nuanced templating engine, mechanisms to insert items that are supposed to appear in dialog at particular places (the statement of the theme on page 12, the six things that need to change between 1 and 10, the demonstrations that these things have changed in the third act), and mechanisms to ensure thematic consistency & continuity between scenes.

Just by adding a ton of 3-page templates, I could produce something that's clearly a screenplay (though repetitive and incoherent). I'll see whether or not that's feasible in November, but a better templating engine (maybe something with named rules & support for persistent names like GGC) will help.
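As a toy illustration of arranging scenes along the beat sheet (the beat names are the book's fifteen beats; the scene text and emotional-change logic are placeholders, not the savethecat repo's actual data model):

```python
# Walk Save The Cat's beat sheet and attach a placeholder scene plus
# an alternating emotional change to each beat.
BEATS = ["Opening Image", "Theme Stated", "Set-Up", "Catalyst", "Debate",
         "Break into Two", "B Story", "Fun and Games", "Midpoint",
         "Bad Guys Close In", "All Is Lost", "Dark Night of the Soul",
         "Break into Three", "Finale", "Final Image"]

def beat_sheet(protagonist, antagonist):
    return [{"beat": beat,
             "scene": f"{protagonist} confronts {antagonist} ({beat})",
             "emotional_change": "+" if i % 2 == 0 else "-"}
            for i, beat in enumerate(BEATS)]
```

Each dict here is where a full 3-page template, conflict archetype, and continuity checks would eventually hang.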

Entry: The Early Adventures of Indiana Jones

Surely Indiana Jones had some less grand adventures before he landed on the silver screen. I'm going to attempt to generate an anthology of those adventures, probably using Python. 🎱

Entry: Field Guide

I plan on generating a field guide to a fictional location. The field guide will include maps, photos of nature, and lots of descriptions of native wildlife. Since the goal is to generate a novel, the book will be written from a first-person perspective, and the author's personal touch will be important.

The Track Method

The Story Compiler approach defined by Chris Pressey is interesting, particularly for its unique method of story generation... You start with a "null story"...

[IntroduceCharacters, *, CharactersConvalesce]

And you then insert a subplot in this null plot within the asterisk (Topic_1):

[IntroduceCharacters, *, Topic_1_Start, *, Topic_1_End, *, CharactersConvalesce]

And you can recursively add in another subplot (Topic_2)...

[IntroduceCharacters, *, Topic_1_Start, Topic_2_Start, *, Topic_1_End, Topic_2_End, *, CharactersConvalesce]

And then keep adding more and more sub-plots to artificially lengthen the story:

[IntroduceCharacters, *, Topic_1_Start, Topic_2_Start, Topic_3_Start, Topic_4_Start, *, Topic_1_End, Topic_2_End, Topic_3_End, Topic_4_End, *, CharactersConvalesce]
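Under one reading of the examples above - a new topic's Start goes after the last existing Start, its End after the last existing End, and the first topic expands the middle asterisk - the insertion step can be sketched as:

```python
# Insert subplot n into a plot outline, following the pattern of the
# worked examples: the first subplot splits the middle asterisk; later
# subplots stack their Start/End markers after the existing ones.
def insert_subplot(plot, n):
    start, end = f"Topic_{n}_Start", f"Topic_{n}_End"
    starts = [i for i, x in enumerate(plot) if x.endswith("_Start")]
    ends = [i for i, x in enumerate(plot) if x.endswith("_End")]
    if not starts:
        i = plot.index("*")  # first subplot: expand the middle asterisk
        return plot[:i] + ["*", start, "*", end, "*"] + plot[i + 1:]
    plot = list(plot)
    plot.insert(ends[-1] + 1, end)      # ends sit after starts, so this
    plot.insert(starts[-1] + 1, start)  # insert doesn't shift starts[-1]
    return plot
```

Applying it repeatedly to the null story reproduces the outlines above.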

But see, that requires code. And writing code is a problem in programming because of the dreaded word - maintenance.

Every line of code written comes at a price: maintenance. To avoid paying for a lot of code, we build reusable software. The problem with code re-use is that it gets in the way of changing your mind later on. ...
If we see ‘lines of code’ as ‘lines spent’, then when we delete lines of code, we are lowering the cost of maintenance. Instead of building re-usable software, we should try to build disposable software.---Write code that is easy to delete, not easy to extend.

So what if we just skip the Compiler stage? Hardcode the final generated story outline within the computer code itself... providing structure for the plots that the computer will arrange in a pleasing manner. Here's a hardcoded version of the outline:

  • Introduce Characters
  • Topic 1 Section
  • Topic 2 Section
  • Topic 3 Section
  • Topic 4 Section
  • Topic 1 Section
  • Topic 2 Section
  • ...
  • Topic 3 Section
  • Topic 4 Section
  • Characters Convalesce

We have all the benefits of organization through using a story compiler, without ever having to write out a story compiler.

We can go even further in eliminating code. The Story Compiler approach had continuity within the subplots... first, you have to talk about the McGuffin, then you have to lose the McGuffin, then you have to find it again. But maybe we can drop the need for continuity entirely, in favor of each section of a subplot dealing with the same "topic". Each section of the subplot illustrates a specific aspect of the topic, and once we finish addressing that aspect, we can move on to a new one. So, we could have a topic about a McGuffin... with three sections: the character loses the McGuffin, the character finds the McGuffin (or builds a new one), and the character talks about what the McGuffin does and why it is important. And the order of those sections is utterly irrelevant during the text generation process.

My goal during this month is to actually test out this approach by using a pre-generated corpus. The corpus itself will not be enough to generate a human-readable novel (I don't have enough words for that and don't necessarily want to go through the effort of finding 50,000 words). However, if it is human-readable for over 5,000 words, I am willing to declare it a decent success.


As a side-note: this approach was not my idea. I actually showed the Story Compiler approach to a literary publisher, and after a few days of discussion, he came up with this new approach - which I believe he called the "Track Method". Consider a runner who runs around a track. He reaches the first checkpoint - Topic 1. Then he runs and sees the next checkpoint - Topic 2. He keeps running, encountering Topic 3. Finally, he makes it to the finishing line - Topic 4... and then he keeps running, back to Topic 1.

The runner can keep running forever and ever, but it's probably best to keep to 4 different topics with 4 laps around the track. And you can easily change the tone of the story by essentially changing the beginning and ending of the story (the IntroduceCharacters and CharactersConvalesce stages).

The Track Method can also be used to structure manual writing (making the Story Compiler probably the first story generation technique to inspire human writers), and the literary publisher mentioned to me the possibility of using the Track Method to help him write a speech.


EDIT: One important issue to note here, though, is that you do need to write transition phrases to explain why the author is moving away from one topic and on to a different one (or, to continue with the running metaphor, what happens when the runner leaves the Topic 1 checkpoint and reaches the Topic 2 checkpoint).

These transition phrases can be very generic and interchangeable though (so you could write 16 different "stock" transition phrases).
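The whole loop - laps around the track, one unused section per checkpoint, stock transitions in between - can be sketched like this (all topic names, sections, and phrases below are placeholders):

```python
# Lap the "track" a fixed number of times, emitting one section per
# topic checkpoint with a stock transition phrase before it. Section
# order within a topic is deliberately irrelevant, so we shuffle.
import random

def track_method(intro, outro, topics, transitions, laps=4, rng=random):
    """topics maps a topic name to a list of section texts."""
    pools = {t: list(sections) for t, sections in topics.items()}
    for sections in pools.values():
        rng.shuffle(sections)
    out = [intro]
    for _ in range(laps):
        for topic in topics:
            if pools[topic]:
                out.append(rng.choice(transitions))
                out.append(pools[topic].pop())
    out.append(outro)
    return out
```

With 4 topics, 4 laps, and ~16 stock transitions, the structure writes itself; only the section texts carry real content.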

Entry: Live coding a novel

Live coding a novel would be accomplished by having an algorithm spit out words in real time while being continually modified by the novelist.

The word speed is important. If it's too fast, the novel will develop faster than I can steer the algorithm, and the product will be boring. If it's too slow, it will take too much of my time to complete. At a constant rate of 4 words per second (which is pretty fast, honestly), I can produce 57600 words in 4 hours, which seems like a reasonable amount of time, especially when broken into sessions.
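A quick check of the rate arithmetic:

```python
# 4 words/second, sustained over 4 one-hour (or shorter) sessions.
words_per_second = 4
session_hours = 4
total_words = words_per_second * 60 * 60 * session_hours
print(total_words)  # 57600, comfortably past the 50k target
```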

Entry: [made-up programming language] - The Definitive Guide

Does exactly what it says on the tin. Creates an entirely made-up programming language, decides on its usage and characteristics, and creates a definitive language guidebook for it.

Inspired by the many, many, many programming tomes that litter my office. Mostly unread apart from the really good ones.

Multiverse Crime Generator 2016

I am planning on participating! Yeah!

I'm going with a story compiler approach, as originally devised by cpressey.

I am going to go with crimes. I have twelve so far - those remarked upon by Italo Calvino in his essay "Prose and Anticombinatorics". I'm probably going to add more.

The approach will go like so: generate the plot, generate events, generate the necessary number of characters (murder crimes remove characters), assign actors and victims to these crimes, and then generate the sentences related to these stories.

Entry: #NaNoProcJamGenMo Toying with recursively generated video game narrative

I actually finished NaNoWriMo a couple of years ago, but with ProcJam this month I figured I'd give NaNoGenMo a try and kill three birds with one piece of code. Hopefully.

I've no idea what I'm doing, but so what?!

In my head, my approach will be to build a recursive storytelling program that starts with a basic structure (provided or generated), then breaks it down and fills in details on each level of recursion. That is, we start with a start and ending progression "A wizard gave a hobbit a ring. The hobbit threw the ring into the lava." And the program fills in a story with additional starts and ends, always respecting the story line already created, just adding characters, text, progression, and whatnot one level at a time.
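That fill-in-between-the-ends step might look like the sketch below, where each pass nests a new start/end pair one level inside the last; the events are placeholder strings and the pairing logic is my own guess at the approach:

```python
# Recursively deepen a story: each (start, end) pair is inserted just
# inside the previous level, always respecting the line already there.
def expand(story, detail_pairs):
    story = list(story)
    depth = 1
    for start, end in detail_pairs:
        story.insert(depth, start)             # just after the current opening
        story.insert(len(story) - depth, end)  # just before the current ending
        depth += 1
    return story
```

Starting from "A wizard gave a hobbit a ring." / "The hobbit threw the ring into the lava.", each pair of added events fills in one more level of the journey.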

Will this work? I've no idea. But you've got to start somewhere.

Simulation!

I'll be participating this year. Going with a more simulationist approach, I hope.

mechanical-turk'ed novel

I'm wondering about past examples of novel-generation working with Turkers. I'm thinking it might be interesting to create a crowdsourced/outsourced generated novel.

Fire up the pulp mill!

I've always wanted to do something like this. My theory is that a bunch of dumb ideas stuck together will create great things. Or at least something entertaining. My project is "pulpmill", which I hope will create trashy epic fantasy novels. Starting, of course, with generating the map and world-building.

Best place to follow for updates is on twitter at @joeld42 (https://twitter.com/joeld42), my git repo for this is
https://github.com/joeld42/jbd_nanogenmo

EDIT: Ping me on Twitter (or reply here) if you would like to be included as a possible character in the novel generator!

Intention to participate: Various ideas

Different ideas I would like to try out:

  • Run a simulation of some political intrigue with maybe some battles and backstabbery, then generate a natural-language "log" of events
  • Play around with using "words" as a graphical tool, and have some kind of drawing made of 50k+ words

Resources

This is an open issue where you can comment and add resources that might come in handy for NaNoGenMo.

There are already a ton of resources on the old resources threads for the 2013 edition, the 2014 edition, and the 2015 edition.

Actually fun computer-generated text adventure game

I've wanted to do this for a while, and I've been thinking about a lot of possibilities. I've done some Markov Chains in the past, but that wasn't exactly interesting. I'm thinking of making some sort of text adventure, because those typically follow a sort of 'formula', but are still interesting. Letting the human choose the plot through actions will make a story much more engaging and interesting than a computer could come up with on its own (for now at least!). I'll probably write it in python using the Tale library. Let's see what I can come up with!

In!

Not sure what to do yet, some throwaway ideas:

  • A boring diary. Or is it a rubbish reference book? Just clock times: "The time is twelve am. Twelve oh one am. Twelve oh two am... And thirty seconds."
  • Chained tweets: 1. Find tweet, 2. find tweet starting with the previous tweet's last word, 3. repeat to 50k. Maybe ignore tweets with @ or links or #.
  • Dear Santa, I have been good this year and want: 1. [from Twitter, I want [thing]], 2. repeat to 50k.
  • A poet, struggling, gets stuck with words, tries others. Modify an existing epic poem, use Wordnik's related words.
  • A 50k version of Leevi and the Leavings' "Onnelliset". [Done in 2019]
  • A Dictionary of Loved and Hated words. A bit like A Dictionary of Not-A-Words but using @lovihatibot instead: perhaps a loved then a hated word. Perhaps a loved, and then the same word hated, with example tweets. Perhaps alongside definitions. Perhaps definitions. Perhaps a chronological report thing to show top ones from each month over the last year.
  • A friend says he has trouble reading Russian literature due to keeping track of the unfamiliar names. So perhaps rewrite some Dostoyevsky with names like Dennis, Michelle.
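The chained-tweets idea can be sketched offline, with placeholder strings standing in for real tweets pulled from the API:

```python
# Index candidate texts by first word, then repeatedly pick one that
# starts with the previous text's last word, until the word budget is
# met or the chain dead-ends.
from collections import defaultdict

def chain(texts, seed, limit=50000):
    by_first = defaultdict(list)
    for t in texts:
        by_first[t.split()[0].lower()].append(t)
    out, words = [seed], len(seed.split())
    while words < limit:
        last = out[-1].split()[-1].lower().strip(".,!?")
        candidates = by_first.get(last)
        if not candidates:
            break  # dead end: nothing starts with that word
        nxt = candidates.pop()  # consume each tweet at most once
        out.append(nxt)
        words += len(nxt.split())
    return out
```

A live version would also filter out tweets containing @, #, or links, as the note above suggests.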

My stuff from 2015 and 2014.

Unnamed Songwriter Bot

Technically not a "novel", but this bot will generate full song tablature (focused on lyrics, chords are a stretch goal) based on a given keyword.

I'm in. Got too many ideas! Most are bad.

So yeah. Intention to participate.

I've got a few ideas.

  • Like last year, narrate a simulation of a scene/game.
  • Picture story book. Has a scene with some text above it. Could be silly. Would be a huge goofy file.
  • Regrets of many lives (Prompted by https://www.youtube.com/watch?v=8Yq6gM55_aY&feature=youtu.be)
  • Narrate transactions/diary entries at some sort of a store. Prices going up and down, angry customers, weather, sales, debt.
  • Story in the form of a git commit log. Bug fixes, hacks, boss pressure, random crap.
  • Run Markov chain stuff through Google Translate and back, and then through a grammar fixer (is that a thing?).
  • Smash memes together and watch the chaos. Grab random memes and try and cut them together.
  • Sentence magnifying glass. Zooming in on words replaces them with their definitions, which are zoomed on, which are zoomed in on...
  • Clickbait / News site title generator. Get lots of them and run them through Markov chains or something.
  • One big motivational poster. Collects quotes and advice and runs them through Markov chains. Tries to apply typography to make a really, really long image. (Like this, but you can pretty much scroll forever.)
  • Family tree follower. Like those Bible passages where it just talks about descendants. Could get pretty crazy. Include death reasons heh.
  • Don't remove stuff, Apple. Joke on Apple removing the escape key and the headphone jack. Apple adds bigger devices with varying features, of which old features are removed. 'Today - Apple removes the hologram, replacing it with a newer, black and white hologram.'
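Of these, the sentence magnifying glass is the easiest to sketch (in Python here, purely for illustration, with a toy glossary standing in for a real dictionary lookup):

```python
# Each "zoom" replaces any defined word with its definition; the
# output of one zoom can itself be zoomed in on.
GLOSSARY = {
    "cat": "small domesticated feline",
    "feline": "animal of the family that includes cats",
}

def zoom(sentence, levels=1):
    for _ in range(levels):
        sentence = " ".join(GLOSSARY.get(w, w) for w in sentence.split())
    return sentence
```

A few levels of zoom on a one-line seed sentence would balloon toward 50k words quickly.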

For the code, my favorite language is Haxe; my entry last year was made with it, and that worked out okay. I'm considering doing it in Node.js too, but I'm still uncertain.

Github repo with sources of every idea/project.

Entry: Superphreak

This is a tentative note to indicate my intent to attempt a novel this November.

Inspired by the massive amount of data on phreak/hacker culture at textfiles.com, I plan to figure out how text might be run through the process of "boxing," as if the central consciousness of the novel is trying to wardial or autodial lists of numbers, occasionally getting a respondent and interacting with them in some given conceptual way, given the "box" they have in use at a given time.

Largely still an amorphous idea, I bring my concept to the hive mind as a note to say "I am doing this."

I've loved reading the logs these past years. Why not do it?

I've also given some thought to the concept of "trashing," and using geo-located dumpsters as a kind of marker for an adventure/collaged narrative from the various documents or manuals that my "phreaker" finds on their various trashing runs. As such, physical security could be a variable in terms of site selection, deterring my narrator from attempting to "dumpster dive."

The setting would almost certainly need to be the '80s in an urban area; probably the area of New Jersey where Bell and its various cognates resided for some time.

One barrier I see is finding the right data source. Given that I know DC has an open dataset (a bunch of them for people interested: opendata.dc.gov) for trash removal, there'd be a more convenient setting in DC. The New Jersey data I've found in brief searching doesn't quite fit what I'm looking for.

Ciphered Short Stories

Since I've recently joined GitHub, I think it might be interesting to tail the activity log and either generate a screenplay or short chapters based on it. PERHAPS ON THE NANOGENMO REPOS THEMSELVES.

Yoko Ono / Fluxus / Grapefruit

  1. I love Yoko Ono's Grapefruit
  2. In the wonderful Finals Fantasy: Speculative Projects For Game Arts Students there is a wonderful provocation by Jake Elliott that could be a nice thing to generate.
Week 2 - Instruction sets

Workshop: Fluxus.

Assignment: Write 150 short instruction set games. Print and bind them in a book. Name the book after a citrus fruit. Here is an example, from "Grapefruit" by Yoko Ono.

SNOW PIECE

Think that snow is falling. Think that snow is falling
everywhere all the time. When you talk with a person, think
that snow is falling between you and on the person.
Stop conversing when you think the person is covered by snow.

1963

An Atlas of Secret Sites

That's what I hope to make. All sorts of things happening in November may force me out, but I'm-a try!

Descriptanator 2000

I have a new theory this time... Mostly involving a "description" engine that can describe certain types of things with sort of pre-fab sentence structures. Unfortunately it will require a lot of upfront work, but I think it can work. I also have some crazy ideas that involve a genetic algorithm to mutate properties of the environment... We'll see :P

Entry: Epistolary Logs

My goal is to generate a novel that is solely told through a Space Station's system logs. I want to write all the code for this myself, so natural English sentences are pretty much out of the question. It would make sense for computer logs to be basic and built from templates.

I'm hoping to really lean into that nature of the logs and use it to my advantage. Repetitive log entries (could) build tension and make anomalies feel more out of place (sort of like how Paranormal Activity 2 uses repeated camera loops to build tension).

As such, the main thing I'll be focusing on is writing something that builds plot points and keeps track of character locations. I also may build an associated search tool to let readers explore the novel by trying to piece together plot points from anomalies.
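A minimal sketch of the template idea, where every system name, timestamp format, and fault string is invented for illustration:

```python
# Mostly-routine status lines with occasional anomalies that stand out
# against the repetition.
import random

ROUTINE = "[{day:03d}:{hour:02d}] {system}: nominal"
ANOMALY = "[{day:03d}:{hour:02d}] {system}: WARNING - {fault}"
SYSTEMS = ["LIFE-SUPPORT", "REACTOR", "AIRLOCK-3"]
FAULTS = ["pressure drift", "unscheduled access", "sensor offline"]

def day_log(day, anomaly_rate=0.05, rng=random):
    lines = []
    for hour in range(24):
        for system in SYSTEMS:
            if rng.random() < anomaly_rate:
                lines.append(ANOMALY.format(day=day, hour=hour,
                                            system=system,
                                            fault=rng.choice(FAULTS)))
            else:
                lines.append(ROUTINE.format(day=day, hour=hour, system=system))
    return lines
```

The plot layer would then choose when and where `anomaly_rate` spikes, so the warnings trace character movements rather than pure noise.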

I don't know if I can nail this idea, but I really think it has promise.
