Search Results

Search found 10673 results on 427 pages for 'recovery disk'.

Page 352/427 | < Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >

  • Is writing to a socket an arbitrary limitation of the sendfile() syscall?

    - by Sufian
    Prelude: sendfile() is an extremely useful syscall for two reasons. First, it's less code than a read()/write() (or recv()/send(), if you prefer that jive) loop. Second, it's faster (fewer syscalls, and the implementation may copy between devices without an intermediate userspace buffer) than the aforementioned methods. Less code. More efficient. Awesome. In UNIX, everything is (mostly) a file. This is the ugly territory where platonic theory collides with real-world practice. I understand that sockets are fundamentally different from files residing on some device. I haven't dug through the sources of Linux/*BSD/Darwin/whatever OS implements sendfile() to find out why this specific syscall is restricted to writing to sockets (specifically, streaming sockets). I just want to know... Question: What is limiting sendfile() from allowing the destination file descriptor to be something besides a socket (like a disk file, or a pipe)?
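
    For reference, this is the Linux flavour of the call being discussed — a minimal sketch (error handling trimmed, and a real caller would loop on short writes):

        #include <sys/types.h>
        #include <sys/sendfile.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        #include <unistd.h>

        /* Send an entire regular file to an already-connected socket.
         * out_fd has historically had to be a socket; in_fd must support
         * mmap-like reads, i.e. a regular file. */
        ssize_t send_whole_file(int sock_fd, const char *path)
        {
            int in_fd = open(path, O_RDONLY);
            if (in_fd < 0)
                return -1;

            struct stat st;
            if (fstat(in_fd, &st) < 0) {
                close(in_fd);
                return -1;
            }

            off_t offset = 0; /* updated by the kernel as it copies */
            ssize_t sent = sendfile(sock_fd, in_fd, &offset, st.st_size);

            close(in_fd);
            return sent;
        }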

    Read the article

  • How to move Mailboxes over from old Exchange 2007 to new EBS 2008 network?

    - by Qwerty
    This question is similar to: http://serverfault.com/questions/39070/how-to-move-exchange-2003-mailbox-or-store-from-2003-to-2007-on-separate-networks Basically, I am trying to move our Exchange mailboxes over to a test domain that is hosting EBS 2008 with Exchange 2007. We plan to make the move as soon as we have our Exchange data across. I have tried moving a DB with mailboxes over, but cannot get it to mount in the new Exchange in any way possible, including mounting it onto a recovery store. From my understanding, the ONLY prerequisite for moving Exchange DBs across is that the organization name must be the same (unlike previous versions of Exchange). If anyone has any insight as to why I cannot mount and simply reattach the mailboxes, please give me an idea as to what could be wrong. It should be as simple as this. Note that the DBs I have are in a clean state. I cannot use ExMerge because I am not running any mailboxes on 2003. I have also tried using a 32-bit Vista machine with the Export-Mailbox cmdlet to extract mailboxes, but anything I do results in permission errors. I have tried to troubleshoot these with no success. I am running as full admin with proper Exchange roles, and yet it still gives me access-denied errors: Export-Mailbox : MapiExceptionNetworkError: Unable to make admin interface connection to server. (hr=0x80040115, ec=-2147221227) Also some errors show in the management console: get-MailboxDatabase Completed Warning: ERROR: Could not connect to the Microsoft Exchange Information Store service on server TATOOINE.baytech.local. One of the following problems may be occurring: 1- The Microsoft Exchange Information Store service is not running. 2- There is no network connectivity to server TATOOINE.baytech.local. 3- You do not have sufficient permissions to perform this command. The following permissions are required to perform this command: Exchange View-Only Administrator and local administrators group for the target server. 4- Credentials have been cached for an unpriviledged user. Try removing the entry for this server from Stored User Names and Passwords. Why I have to use a 32-bit machine to export a simple .pst file is beyond me... So yeah, I am now out of ideas, and any help would be great! Thanks in advance.

    Read the article

  • How does NTFS actually work with B-trees?

    - by bakra
    To improve performance, NTFS directories use a special data management structure called a B-tree. The "B-tree" concept here refers to a "tree of storage units" that holds the contents of an individual directory. What I don't understand is: where on the disk is this tree stored? It's surely not created every time we reboot... that would take a lot of time. And since it's a tree (a dynamic data structure), unlike arrays it will grow, so space needs to be allocated every time it grows. So how is this "dynamic metadata" stored?

    Read the article

  • Optimising local image loading/rendering on iPhone

    - by Tricky
    Hi, I'm looking to create an interface where the user can navigate through large volumes of images. Each image has a 128x128 thumbnail that I wish to display, and the interface will be kind of similar to Cover Flow in operation. I have this all working in principle, but am becoming stuck when navigating through content at speed. The interface begins to stutter and become jerky. I believe this is primarily because of disk I/O and the cost of rendering each image. Is there any way this can be handed over to a separate thread simply, defaulting to a greyed-out thumbnail until the image has loaded? How have Apple managed to achieve this in Cover Flow? Many thanks,

    Read the article

  • Implementing a Mac OS X loop device driver

    - by Inso Reiges
    Hello, I am aware that there is a native implementation of a loop device system on OS X, the hdiutil/hdix driver. But due to a number of reasons I need to roll out my own custom loop driver. Since hdix is closed-source, can anyone give me some starting pointers, links, advice, etc. on the subject? I have experience implementing loop drivers on Linux and Windows, but I don't really have a clue where to start from on OS X. The basic functionality I need to implement is the same: given any file on disk, simulate a virtual block device interface for it. I would also like my loop driver to use all the native partition filter stacking available for real and hdix virtual block devices on OS X.

    Read the article

  • Stream (.NET) handling best practices

    - by Jader Dias
    The question is entitled with the word "Stream" because the question below is a concrete example of a more general doubt I have about streams. I have a problem that admits two solutions, and I want to know which one is best: (1) I download a file, save it to disk (2 min), then read it and write the contents to the DB (+2 min). (2) I download a file and write the contents directly to the DB (3 min). If the write to the DB fails, I'll have to download again in the second case, but not in the first case. Which is best? Which would you use?

    Read the article

  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system which is going to be booting and hosting its rootfs on an SSD. We are currently looking at using Intel X18-M SSDs. The file system structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point trying to optimise the choice of file system that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?
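
    As a concrete illustration of the partition question, a minimal /etc/fstab sketch for the layout described. The device names and the ext4 choice are assumptions, not recommendations; noatime cuts small metadata writes, ro keeps the static section read-only between upgrades (remount rw to upgrade), and discard enables TRIM only if the drive's firmware and the kernel support it:

        # device     mountpoint  type  options           dump  pass
        /dev/sda1    /           ext4  noatime,discard   0     1
        /dev/sda2    /usr        ext4  ro,noatime        0     2
        /dev/sda3    /var        ext4  noatime,discard   0     2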

    Read the article

  • Writing temporary data from R

    - by Shane
    I want to write some temporary data to disk in an R package, and I want to be sure that it can run on every OS without assuming the user has admin rights. Is there an existing R function that can provide a path to a temporary directory on all major OSes? Or a way to reference a user's home directory? Otherwise, I was thinking of trying this: Sys.getenv("temp"). I presume that I can't expect people to have write access to their R locations; otherwise I could reference a path within the package directory: .find.package("package.name").
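
    For what it's worth, base R does ship portable helpers for exactly this, so a sketch of that route (no admin rights or hard-coded paths assumed) would be:

        d <- tempdir()                     # per-session temporary directory, any OS
        f <- tempfile(pattern = "mydata")  # unique file name inside it
        write.csv(data.frame(x = 1:3), f)  # write, then clean up when done
        home <- path.expand("~")           # the user's home directory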

    Read the article

  • Simple but efficient way to store a series of small changes to an image?

    - by finnw
    I have a series of images. Each one is typically (but not always) similar to the previous one, with 3 or 4 small rectangular regions updated. I need to record these changes using a minimum of disk space. The source images are not compressed, but I would like the deltas to be compressed. I need to be able to recreate the images exactly as input (so a lossy video codec is not appropriate). I am thinking of something along the lines of: (1) composite the new image with a negative of the old image; (2) save the composited image in any common format that can compress using RLE (probably PNG); (3) recreate the second image by compositing the previous image with the delta. Although the images have an alpha channel, I can ignore it for the purposes of this function. Is there an easy-to-implement algorithm or free Java library with this capability?
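
    A minimal sketch of the XOR flavour of that idea in plain Java (javax.imageio only; the file names are placeholders, and both images are assumed to have the same dimensions). Identical pixels XOR to zero, so the delta is mostly flat and PNG — being lossless — compresses it well; XORing the delta with the old image restores the new one exactly:

        import javax.imageio.ImageIO;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;

        public class ImageDelta {
            // delta = a XOR b, pixel by pixel; applying it to a yields b again
            static BufferedImage xor(BufferedImage a, BufferedImage b) {
                BufferedImage out = new BufferedImage(
                        a.getWidth(), a.getHeight(), BufferedImage.TYPE_INT_ARGB);
                for (int y = 0; y < a.getHeight(); y++)
                    for (int x = 0; x < a.getWidth(); x++)
                        out.setRGB(x, y, a.getRGB(x, y) ^ b.getRGB(x, y));
                return out;
            }

            public static void main(String[] args) throws IOException {
                BufferedImage oldImg = ImageIO.read(new File("old.png"));
                BufferedImage newImg = ImageIO.read(new File("new.png"));
                // mostly-zero delta; PNG keeps it small
                ImageIO.write(xor(oldImg, newImg), "png", new File("delta.png"));
                // exact round trip: old XOR delta == new
                BufferedImage delta = ImageIO.read(new File("delta.png"));
                ImageIO.write(xor(oldImg, delta), "png", new File("restored.png"));
            }
        }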

    Read the article

  • Unencrypted Image of a TrueCrypt-Encrypted System Partition

    - by Dexter
    The general tenor around the internet seems to be that you can't create images of system partitions that have been encrypted (with TrueCrypt) other than with dd or similar sector-by-sector copy tools. These files however are very impractical given their size (and are obviously incompressible), which makes keeping multiple states/backups of your system partition rather expensive (especially considering current HDD prices). The problem is that backup tools (like Acronis True Image, Clonezilla, etc.) won't give you the option to create an image of (mounted/opened) TrueCrypt partitions, or that there is no recovery environment for restoring the backup that would allow you to run TrueCrypt before doing any actual restoring. After some trial and error, however, I believe I have found a very simple way. Since TrueCrypt (running in Linux) creates a virtual block device that it uses for mounting the unencrypted partitions into the file system, partclone can be used for creating/restoring images. What I did: (1) boot up a Linux live disk; (2) mount/open the drive/device/partition in TrueCrypt; (3) unmount the filesystem mount point again, like so: umount /media/truecryptX ("X" being the partition number assigned by TrueCrypt); (4) use partclone (this is what Clonezilla would do too, except that Clonezilla only offers to back up real drive partitions, not virtual block devices): partclone.ntfs -c -s /dev/mapper/truecryptX -o nameOfBackupFile. For restoring, steps 1-3 remain the same, and step 4 is: partclone.ntfs -r -s nameOfBackupFile -o /dev/mapper/truecryptX. A backup and test-restore of the system (with this method) seems to have worked fine (and the changed settings were reverted to the backup state). The backup file is ~40 GB (and compressible down to <8 GB with 7zip/LZMA2 on the "fast" setting). I can't quite believe that I'm the only one who wants to create images of encrypted drives but doesn't want to waste 100 GB on the backup of one single system state. So my question now is: given how simple this was, and that no one seems to mention anywhere that this is possible - did I miss something? Or did I do something wrong? Is there any situation that I didn't think of where this method will fail? Obviously, the backup file needs to be stored in some other encrypted place in order to remain confidential, since it is unencrypted. Also, in order to do a full "bare metal" restore, one would have to actually first (re-)install Windows, encrypt it, and only then restore the backup file. The funny thing, however, is that you won't need to back up any partition tables, etc., since the reinstall will effectively take care of that. Is there anything else? This is IMHO still a lot better than having sector-by-sector images.

    Read the article

  • Critiquing my first Python script

    - by tipu
    A little bit of background: I'm building an inverted index for a search engine. I was originally using PHP, but because of the number of times I needed to write to disk, I wanted to make a threaded indexer. There's a problem with that, because PHP is not thread-safe. I then tried Java, but I ended up with at least 20 try/catch blocks because of the JSON data structure I was using and the file handling. The code was just too big and ugly. Then I figured I should pick up some Python, because it's flexible like PHP but also thread-safe. Though I'm open to all criticism, what I'd like to learn is the shortcuts that the Python language/library provides that I skipped over. This is a PHP-ified Python script, because all I really did was translate the PHP script line by line into what I thought was its Python equivalent. Thanks. http://pastebin.com/xrg7rf9w

    Read the article

  • SDK for writing DVDs

    - by Matt Warren
    I need to add DVD-writing functionality to an application I'm working on. However, it needs to be able to write out files that are being grabbed "live" from a camera over a long period of time. I can't wait until all the files are captured before I start writing them to the DVD; I need to write them out in chunks as I go along. I've looked at IMAPI v2, but the main problem seems to be that you need to point it to all the files you plan to write out to disk before you start the burning process. I know it has the concept of "sessions", which means you can write to the DVD in several parts before you finally "close" it. But I was wondering if there were any other DVD-writing SDKs that allow you to be constantly writing files to a DVD, and in particular files that are only in memory. It would be more efficient if I didn't have to write the captured images out to hard disk before they are burned to DVD. The solution needs to work under .NET on Windows XP and Vista.

    Read the article

  • fd.seek() IOError: [Errno 22] Invalid argument

    - by Julian Kessel
    My Python interpreter (v2.6.5) raises the above error in the following code part:

        import os

        fd = open("some_filename", "r")
        fd.seek(-2, os.SEEK_END)  # same happens if you exchange the second arg. w/ 2
        data = fd.read(2)

    The last call is fd.seek():

        Traceback (most recent call last):
          File "bot.py", line 250, in <module>
            fd.seek(iterator, os.SEEK_END)
        IOError: [Errno 22] Invalid argument

    The strange thing about this is that the exception occurs only when executing my entire code, not when running just this file-opening part on its own. At the time this part of the code runs, the opened file definitely exists, the disk is not full, and the variable "iterator" contains a correct value, as in the first code block. What could be my mistake? Thanks in advance.

    Read the article

  • Windows I/O manager - classifying IRPs as read-like or write-like

    - by clyfe
    I am writing a Windows file system minifilter driver that must fail IRPs in a preoperation callback. How can I find out from the callback parameters whether the operation is read-like (only reads data) or write-like (modifies data on the disk - write, delete, etc.)? I'm thinking of:

        Data->Iopb->TargetFileObject->ReadAccess
        Data->Iopb->TargetFileObject->WriteAccess

    But I'm not sure; I think these are available only in the postoperation callback. The documentation is really cumbersome. Code sample:

        FLT_PREOP_CALLBACK_STATUS Fail (
            __inout PFLT_CALLBACK_DATA Data,
            __in PCFLT_RELATED_OBJECTS FltObjects,
            __deref_out_opt PVOID *CompletionContext
            )
        {
            FLT_PREOP_CALLBACK_STATUS status = FLT_PREOP_SUCCESS_NO_CALLBACK;

            if ( IS WRITE_LIKE(Data, FltObjects) ) {  // ??? HOW DO I FIND OUT ????
                if ( FLT_IS_FASTIO_OPERATION(Data) ) {
                    status = FLT_PREOP_DISALLOW_FASTIO;
                } else {
                    status = FLT_PREOP_COMPLETE;
                }
                Data->IoStatus.Status = STATUS_ACCESS_DENIED;
                Data->IoStatus.Information = 0;
                return status;
            }
            return status;
        }
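
    One hedged sketch of a different way to classify in the preoperation callback - keying off the major function code in the parameter block rather than the file object's access flags. The particular set of "write-like" majors below is an assumption; extend it to whatever your filter must treat as modifying:

        #include <fltKernel.h>

        static BOOLEAN IsWriteLike(PFLT_CALLBACK_DATA Data)
        {
            switch (Data->Iopb->MajorFunction) {
            case IRP_MJ_WRITE:            /* plain data writes                */
            case IRP_MJ_SET_INFORMATION:  /* rename, delete disposition, size */
            case IRP_MJ_SET_EA:           /* extended attribute changes      */
            case IRP_MJ_SET_SECURITY:     /* ACL changes                     */
                return TRUE;
            default:
                return FALSE;
            }
        }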

    Read the article

  • I want to version control my entire slice

    - by Tom
    I'm renting a slice (i.e., a VPS) from Slicehost. I've spent a day or two filling up /usr with my favorite packages, /etc with configs and init scripts, and so on. Now I want to: (1) save this whole setup somewhere (e.g., to load onto another machine); (2) see what changes I've made to which files; (3) revert changes, tag revisions, and all that other good version control stuff. Saving a disk image gives me (1), but not (2) and (3). Using Subversion (svn import / svn://someotherhost) might give me all three, but I expect problems if I actually try to check a project out into / and maintain .svn directories in root-owned areas. And to load my setup onto a fresh slice, I'd need to install an svn client on it first. Is there a good way to do what I want to do?

    Read the article

  • How do I get the name of the newest file via the Terminal?

    - by Alec
    I'm trying to create a macro for Keyboard Maestro for OS X that does the following: (1) get the name of the newest file in a directory on my disk, based on date created; (2) paste the text "newest file: " plus the name of that file. One of the macro's options is to "Execute a shell script", so I thought that would do it for (1). After Googling around a bit I came up with this:

        cd /path/to/directory/
        ls -t | head -n1

    This sorts it right and returns the first filename. However, it also seems to include a line break, which I do not want. As for (2): I can output the text "newest file: " with a different action in the app and paste the filename behind that. But I'm wondering if you can't return "random text" plus the outcome of the ls command. So my question is: can I do this using only the ls command? And how do I get just the name of the latest file without any line breaks or returns?
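
    A minimal shell sketch of both points (the directory path is a placeholder): command substitution strips the trailing newline, and printf without a \n adds nothing back:

        # note: ls -t sorts by modification time, not creation time
        newest=$(ls -t /path/to/directory | head -n1)
        printf 'newest file: %s' "$newest"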

    Read the article

  • What design considerations should one take to receive text and multiple attachments via web?

    - by ramesh.nagul
    I am developing a web application to accept a bunch of text and attachments (one or more) via email, the web, and other methods. I am planning to build a single interface, mostly a web service, to accept this content. What design considerations should I take into account? I am building the app using ASP.NET MVC 2. Should the attachments be saved to disk or in the database? Should the unified single interface be a web service? What are the pros and cons of using web services to upload files?

    Read the article

  • Random access gzip stream

    - by jkff
    I'd like to be able to do random access into a gzipped file. I can afford to do some preprocessing on it (say, build some kind of index), provided that the result of the preprocessing is much smaller than the file itself. Any advice? My thoughts were: (1) Hack on an existing gzip implementation and serialize its decompressor state every, say, 1 megabyte of compressed data. Then, to do random access, deserialize the decompressor state and read from the megabyte boundary. This seems hard, especially since I'm working with Java and I couldn't find a pure-Java gzip implementation :( (2) Re-compress the file in chunks of 1 MB and do the same as above. This has the disadvantage of doubling the required disk space. (3) Write a simple parser of the gzip format that doesn't do any decompressing and only detects and indexes block boundaries (if there even are any blocks: I haven't yet read the gzip format description).
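
    A hedged sketch of thought (2) in Java (class and method names are mine). Each 1 MB slice of the input becomes an independent gzip member in the output file, and the index records where each member starts, so a reader can seek straight to the member covering a given offset:

        import java.io.*;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.zip.GZIPInputStream;
        import java.util.zip.GZIPOutputStream;

        public class ChunkedGzip {
            static final int CHUNK = 1 << 20; // 1 MB of uncompressed data per member

            // Fill buf as far as possible; returns the number of bytes read.
            static int fill(InputStream in, byte[] buf) throws IOException {
                int off = 0, n;
                while (off < buf.length && (n = in.read(buf, off, buf.length - off)) > 0)
                    off += n;
                return off;
            }

            // Compress src into dst; the returned list maps chunk number to
            // the byte offset of that chunk's gzip member inside dst.
            static List<Long> compress(File src, File dst) throws IOException {
                List<Long> index = new ArrayList<Long>();
                InputStream in = new BufferedInputStream(new FileInputStream(src));
                FileOutputStream out = new FileOutputStream(dst);
                try {
                    byte[] buf = new byte[CHUNK];
                    int n;
                    while ((n = fill(in, buf)) > 0) {
                        index.add(out.getChannel().position());
                        GZIPOutputStream gz = new GZIPOutputStream(out);
                        gz.write(buf, 0, n);
                        gz.finish(); // complete this member without closing 'out'
                    }
                } finally {
                    in.close();
                    out.close();
                }
                return index;
            }

            // Random access: decompress only the chunk containing offset 'pos'.
            static byte[] readChunk(File dst, List<Long> index, long pos) throws IOException {
                RandomAccessFile raf = new RandomAccessFile(dst, "r");
                try {
                    raf.seek(index.get((int) (pos / CHUNK)));
                    GZIPInputStream gz =
                        new GZIPInputStream(new FileInputStream(raf.getFD()));
                    byte[] buf = new byte[CHUNK];
                    int off = 0, n;
                    while (off < CHUNK && (n = gz.read(buf, off, CHUNK - off)) > 0)
                        off += n;
                    byte[] result = new byte[off];
                    System.arraycopy(buf, 0, result, 0, off);
                    return result;
                } finally {
                    raf.close();
                }
            }
        }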

    Read the article

  • Opening a file from a pack URI in WPF

    - by cptmorgan
    Hi all, I am looking to open a .csv file from the application pack to do some unit testing. So what I would really love is some analog of File.ReadAllText(string path) which is instead X.ReadAllText(Uri uri). I haven't as yet been able to find this. Does anyone know if it is possible to read text/bytes (don't mind which) from a file in the pack without extracting this file to disk first? Oh, and by the way, File.ReadAllText(@"pack://application:,,,/SpreadSheetEngine/Tests/Example.csv") didn't work for me... Thanks in advance. Gav
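
    A hedged sketch of one route, assuming the .csv's build action is set to Resource: WPF's Application.GetResourceStream hands back a stream for a pack URI, which a StreamReader can then consume without touching disk:

        using System;
        using System.IO;
        using System.Windows;
        using System.Windows.Resources;

        static class PackReader
        {
            // Reads a resource .csv straight out of the pack, no temp file.
            public static string ReadPackText()
            {
                StreamResourceInfo info = Application.GetResourceStream(
                    new Uri("/SpreadSheetEngine/Tests/Example.csv", UriKind.Relative));
                using (var reader = new StreamReader(info.Stream))
                {
                    return reader.ReadToEnd();
                }
            }
        }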

    Read the article

  • Is it really wrong to version documents using CouchDB's default behaviour?

    - by Tomas Sedovic
    This is one of those "I know I shouldn't do this, but it's oh so convenient" questions. Sorry about that. I plan to use CouchDB for storing a bunch of documents and keeping their entire revision history. CouchDB does the versioning automatically, but relying on it is strongly discouraged: "You cannot rely on document revisions for any other purpose than concurrency control." From what I've found on the CouchDB wiki, the versions can get deleted either during compaction or during replication. As far as I can tell, compaction must always be triggered manually, and replication occurs only when there's more than one database server. The question is: if I won't run compaction and will use only a single database instance for my documents, can I just use CouchDB's document versioning and expect it to work? What other problems might I run into? E.g., does not running compaction hurt performance or consume significantly more disk space (than if I handled the versioning manually)?
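
    For concreteness, a sketch of what "using the versioning" means over CouchDB's HTTP API (database and document names are hypothetical); old revisions stay fetchable only until compaction or replication drops them:

        # list the revision history of a document
        curl 'http://localhost:5984/mydb/mydoc?revs_info=true'

        # fetch a specific old revision while it still exists
        curl 'http://localhost:5984/mydb/mydoc?rev=2-9b50a4b63008378e8d0718a9ad05c7af'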

    Read the article

  • Hell: NTFS "Restore previous versions"...

    - by ttsiodras
    The hell I have experienced these last 24h: Windows 7 installation hosed after a Bluetooth driver install. Attempting to recover using restore points via "Repair" on the bootable Win7 installation CD. Attempting to go back one day in the restore points. No joy. Attempting to go back two days in the restore points. No joy. Attempting to go back one week in the restore points. Still no joy. Windows won't boot. Apparently something is REALLY hosed. And then it hits me - PANIC - the restore points somehow reverted DATA files to their older versions! Word, PowerPoint, SPSS, etc. document versions are all one week old now. Using the "freshest" restore point. Failed to restore yesterday's restore point!!! I am stuck at old versions of the data!!! Booting KNOPPIX, mounting the NTFS partition as read-only under KNOPPIX. Checking. Nope, the data files are still the one-week-old versions. Booting the Win7 CD, Recovery Console - cmd prompt - navigating - yep, data files are still one week old. Removing the drive, mounting it under another Win7 installation. Still old data. Running NTFS undelete on the drive (read-only scan), searching for a file created yesterday. Not found. Despair. At this point, an idea: I will install a brand new Windows installation, keeping the old one in Windows.old (default behaviour of Windows installs). I boot the new install, I go to my C:\Data\ folder, I choose "Restore previous versions", click on yesterday's date, and click open... YES! It works! I can see the latest versions of my files (e.g. from yesterday). Thank God. And then, I try to view the files under the "yesterday snapshot-version" of C:\Users\MyAccount\Desktop... and I get "Permission Denied" as soon as I try to open Users\MyAccount. I make sure I am an administrator. No joy. Apparently, the new Windows installation does not have access to read the "NTFS snapshots" or "Volume Shadow Copy snapshots" of my old Windows account! Cross-installation permissions? I need to somehow tell the new Windows install that I am the same "old" user, so that I will be able to access the Users\MyAccount folder of the snapshot of my old user account. Help?

    Read the article

  • ZIPLIB problem opening zip files

    - by Ahmet vardar
    I am using this class to create zip archives:

        <?php
        // vim: expandtab sw=4 ts=4 sts=4:

        class zipfile
        {
            var $datasec      = array();
            var $ctrl_dir     = array();
            var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00";
            var $old_offset   = 0;

            function unix2DosTime($unixtime = 0)
            {
                $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime);

                if ($timearray['year'] < 1980) {
                    $timearray['year']    = 1980;
                    $timearray['mon']     = 1;
                    $timearray['mday']    = 1;
                    $timearray['hours']   = 0;
                    $timearray['minutes'] = 0;
                    $timearray['seconds'] = 0;
                } // end if

                return (($timearray['year'] - 1980) << 25)
                    | ($timearray['mon'] << 21)
                    | ($timearray['mday'] << 16)
                    | ($timearray['hours'] << 11)
                    | ($timearray['minutes'] << 5)
                    | ($timearray['seconds'] >> 1);
            } // end of the 'unix2DosTime()' method

            function addFile($data, $name, $time = 0)
            {
                $name     = str_replace('\\', '/', $name);
                $dtime    = dechex($this->unix2DosTime($time));
                $hexdtime = '\x' . $dtime[6] . $dtime[7]
                          . '\x' . $dtime[4] . $dtime[5]
                          . '\x' . $dtime[2] . $dtime[3]
                          . '\x' . $dtime[0] . $dtime[1];
                eval('$hexdtime = "' . $hexdtime . '";');

                $fr  = "\x50\x4b\x03\x04";
                $fr .= "\x14\x00";    // ver needed to extract
                $fr .= "\x00\x00";    // gen purpose bit flag
                $fr .= "\x08\x00";    // compression method
                $fr .= $hexdtime;     // last mod time and date

                // "local file header" segment
                $unc_len = strlen($data);
                $crc     = crc32($data);
                $zdata   = gzcompress($data);
                $zdata   = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug
                $c_len   = strlen($zdata);
                $fr .= pack('V', $crc);          // crc32
                $fr .= pack('V', $c_len);        // compressed filesize
                $fr .= pack('V', $unc_len);      // uncompressed filesize
                $fr .= pack('v', strlen($name)); // length of filename
                $fr .= pack('v', 0);             // extra field length
                $fr .= $name;

                // "file data" segment
                $fr .= $zdata;

                // "data descriptor" segment (optional but necessary if archive is
                // not served as file)
                $fr .= pack('V', $crc);     // crc32
                $fr .= pack('V', $c_len);   // compressed filesize
                $fr .= pack('V', $unc_len); // uncompressed filesize

                // add this entry to array
                $this->datasec[] = $fr;

                // now add to central directory record
                $cdrec  = "\x50\x4b\x01\x02";
                $cdrec .= "\x00\x00";                   // version made by
                $cdrec .= "\x14\x00";                   // version needed to extract
                $cdrec .= "\x00\x00";                   // gen purpose bit flag
                $cdrec .= "\x08\x00";                   // compression method
                $cdrec .= $hexdtime;                    // last mod time & date
                $cdrec .= pack('V', $crc);              // crc32
                $cdrec .= pack('V', $c_len);            // compressed filesize
                $cdrec .= pack('V', $unc_len);          // uncompressed filesize
                $cdrec .= pack('v', strlen($name));     // length of filename
                $cdrec .= pack('v', 0);                 // extra field length
                $cdrec .= pack('v', 0);                 // file comment length
                $cdrec .= pack('v', 0);                 // disk number start
                $cdrec .= pack('v', 0);                 // internal file attributes
                $cdrec .= pack('V', 32);                // external file attributes - 'archive' bit set
                $cdrec .= pack('V', $this->old_offset); // relative offset of local header
                $this->old_offset += strlen($fr);
                $cdrec .= $name;

                // optional extra field, file comment goes here
                // save to central directory
                $this->ctrl_dir[] = $cdrec;
            } // end of the 'addFile()' method

            function file()
            {
                $data    = implode('', $this->datasec);
                $ctrldir = implode('', $this->ctrl_dir);

                return $data . $ctrldir . $this->eof_ctrl_dir .
                    pack('v', sizeof($this->ctrl_dir)) . // total # of entries "on this disk"
                    pack('v', sizeof($this->ctrl_dir)) . // total # of entries overall
                    pack('V', strlen($ctrldir)) .        // size of central dir
                    pack('V', strlen($data)) .           // offset to start of central dir
                    "\x00\x00";                          // .zip file comment length
            } // end of the 'file()' method

            function addFiles($files)
            {
                foreach ($files as $file) {
                    if (is_file($file)) { // directory check
                        $data = implode("", file($file));
                        $this->addFile($data, $file);
                    }
                }
            }

            function output($file)
            {
                $fp = fopen($file, "w");
                fwrite($fp, $this->file());
                fclose($fp);
            }
        } // end of the 'zipfile' class
        ?>

    It creates a zip file, but when I try to open it on Mac OS X Snow Leopard and Windows 7, it doesn't open. On the Mac I get this error: Error 1: operation not permitted. Any idea? Thanks

    Read the article

  • Rewrite Query String

    - by Virgil
    Hello, I am trying to write some mod_rewrite rules to generate thumbnails on the fly. So when the URL example.com/media/myphoto.jpg?width=100&height=100 is requested, the rules should rewrite it to example.com/media/myphoto-100x100.jpg; if that file exists on disk it gets served by Apache, and if it doesn't exist a script is called to generate the file. I wrote this:

        RewriteCond %{QUERY_STRING} ^width=(\d+)&height=(\d+)
        RewriteRule ^media/([a-zA-Z0-9_\-]+)\.([a-zA-Z0-9]+)$ media/$1-%1x%2.$2 [L]

        RewriteCond %{QUERY_STRING} ^(.+)?
        RewriteRule ^media/([a-zA-Z0-9_\-\._]+)$ media/index.php?file=$1&%1 [L]

    and I get infinite internal redirects. The first condition is matched, the rule is executed, and right after that I get an internal redirect. I need advice to finish this script. Thank you.
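
    One hedged sketch of a fix, assuming the loop comes from the rewritten URL still carrying the original query string and so matching the first rule again. Appending "?" to the substitution clears the query string, and a file-existence check routes only missing thumbnails to the generator (which would then have to parse the dimensions out of the file name):

        RewriteCond %{QUERY_STRING} ^width=(\d+)&height=(\d+)$
        RewriteRule ^media/([a-zA-Z0-9_\-]+)\.([a-zA-Z0-9]+)$ media/$1-%1x%2.$2? [L]

        # only hand requests for files that don't exist yet to the generator
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^media/([a-zA-Z0-9_\-.]+)$ media/index.php?file=$1 [L]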

    Read the article

  • Grand Central Strategy for Opening Multiple Files

    - by user276632
    I have a working implementation using Grand Central Dispatch queues that (1) opens a file and computes an OpenSSL DSA hash on "queue1", and (2) writes the hash out to a new "sidecar" file for later verification on "queue2". I would like to open multiple files at the same time, but based on some logic that doesn't "choke" the OS by having hundreds of files open and exceeding the hard drive's sustainable output. Photo-browsing applications such as iPhoto or Aperture seem to open multiple files and display them, so I'm assuming this can be done. I'm assuming the biggest limitation will be disk I/O, as the application can (in theory) read and write multiple files simultaneously. Any suggestions? TIA
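
    A hedged sketch of the usual GCD pattern for capping concurrency: a counting semaphore sized to however many files you are willing to have in flight at once. The limit of 8 and the process_file() helper are assumptions, not recommendations:

        #include <stddef.h>
        #include <dispatch/dispatch.h>

        extern void process_file(const char *path); /* open, hash, write sidecar */

        void hash_all(const char **paths, size_t count)
        {
            dispatch_semaphore_t slots = dispatch_semaphore_create(8);
            dispatch_queue_t q =
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
            dispatch_group_t group = dispatch_group_create();

            for (size_t i = 0; i < count; i++) {
                const char *path = paths[i];
                /* blocks until one of the 8 slots frees up */
                dispatch_semaphore_wait(slots, DISPATCH_TIME_FOREVER);
                dispatch_group_async(group, q, ^{
                    process_file(path);
                    dispatch_semaphore_signal(slots); /* release the slot */
                });
            }
            dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        }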

    Read the article

  • (fluxus) learning curve

    - by Inaimathi
    I'm trying to have some fun with fluxus, but its manual and online docs all seem to assume that the reader is already an expert network programmer who's never heard of Scheme before. Consequently, you get passages that try to explain the very basics of prefix notation, but assume that you know how to pipe sound-card data into the program, or set up and connect to an OSC process. Is there any tutorial out there that goes the opposite way? I.e., one that assumes you already have a handle on the Lisp/Scheme thing, but need some pointers before you can properly set up sound sources or an OSC server? Barring that, does anyone know how to get (for example) the system microphone to connect to (fluxus), or how to get it to play a sound file from disk?

    Read the article
