Search Results

Search found 30347 results on 1214 pages for 'jamie read'.


  • PHP: tips/resources/patterns for learning to implement a basic ORM

    - by BoltClock
    I've seen various MVC frameworks as well as standalone ORM frameworks for PHP, as well as other ORM questions here; however, most of the questions ask for existing frameworks to get started with, which is not what I'm looking for. (I have also read this SO question but I'm not sure what to make of it, and the answers are vague.) Instead, I figured I'd learn best by getting my hands dirty and actually writing my own ORM, even a simple one. Except I don't really know how to get started, especially since the code I see in other ORMs is so complicated.

    With my PHP 5.2.x (this is important) MVC framework I have a basic custom database abstraction layer that has:

    - very simple methods like connect($host, $user, $pass, $base), query($sql, $binds), etc.
    - subclasses for each DBMS that it supports
    - a class (and respective subclasses) to represent SQL result sets

    But it does not have:

    - Active Record functionality, which I assume is an ORM thing (correct me if I'm wrong)

    I've read up a little about ORM, and from my understanding ORMs provide a means to further abstract data models from the database itself by representing data as nothing more than PHP-based classes/objects; again, correct me if I am wrong or have missed anything. Still, I'd like some simple tips from anyone else who's dabbled more or less with ORM frameworks. Is there anything else I need to take note of, simple example code for me to refer to, or resources I can read? Thanks a lot in advance!
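
    A minimal PHP 5.2-style sketch of the Active Record idea, for orientation only: it assumes the asker's existing abstraction layer exposes query($sql, $binds) and returns SELECT results as an array of associative arrays, and the "users" table at the end is made up for illustration.

      // One subclass per table, one instance per row.
      abstract class Model
      {
          protected $db;                    // the existing database abstraction layer
          protected $table;                 // set by each subclass
          protected $attributes = array();  // one row: column => value

          public function __construct($db, array $attributes = array())
          {
              $this->db = $db;
              $this->attributes = $attributes;
          }

          public function __get($col)
          {
              return isset($this->attributes[$col]) ? $this->attributes[$col] : null;
          }

          public function __set($col, $value)
          {
              $this->attributes[$col] = $value;
          }

          // Fill this object from the row with the given primary key.
          public function load($id)
          {
              $rows = $this->db->query("SELECT * FROM {$this->table} WHERE id = ?", array($id));
              if (!empty($rows)) {
                  $this->attributes = $rows[0];
              }
              return $this;
          }

          // INSERT a new row, or UPDATE if the object already has an id.
          public function save()
          {
              if (isset($this->attributes['id'])) {
                  $sets = array();
                  $binds = array();
                  foreach ($this->attributes as $col => $val) {
                      if ($col === 'id') { continue; }
                      $sets[] = "$col = ?";
                      $binds[] = $val;
                  }
                  $binds[] = $this->attributes['id'];
                  $this->db->query("UPDATE {$this->table} SET " . implode(', ', $sets) . " WHERE id = ?", $binds);
              } else {
                  $cols  = array_keys($this->attributes);
                  $marks = implode(', ', array_fill(0, count($cols), '?'));
                  $this->db->query("INSERT INTO {$this->table} (" . implode(', ', $cols) . ") VALUES ($marks)", array_values($this->attributes));
              }
          }
      }

      // Usage sketch: class User extends Model { protected $table = 'users'; }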


  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds. So the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings.

    So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the Master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the Master using only 1 wire for data (no room for more than that!). So far then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's the communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented under such constraints?

    TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.
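
    For a sense of how small a crude, fixed-rate, one-wire transmit routine can be, here is a hedged C sketch; OUTPUT_HIGH(), OUTPUT_LOW(), _delay_us() and BIT_PERIOD_US are hypothetical stand-ins for the real AVR port accesses and timing, so treat it as an illustration of the idea rather than working ATTiny firmware.

      #include <stdint.h>

      /* Clock out one byte, LSB first, framed by a start bit and a stop bit,
         at a bit period agreed on with the master beforehand. */
      static void send_byte(uint8_t value)
      {
          OUTPUT_LOW();                      /* start bit: pull the line low */
          _delay_us(BIT_PERIOD_US);
          for (uint8_t i = 0; i < 8; i++) {
              if (value & (1 << i)) {
                  OUTPUT_HIGH();
              } else {
                  OUTPUT_LOW();
              }
              _delay_us(BIT_PERIOD_US);
          }
          OUTPUT_HIGH();                     /* stop bit: release the line */
          _delay_us(BIT_PERIOD_US);
      }

    Routines of this shape typically compile down to a few dozen bytes, which is one reason a software-UART-style protocol is usually considered feasible in 1k without dropping to assembly; whether it fits here still depends on the rest of the program.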


  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've recently been working on a program that converts flac files to mp3 in C# using flac.exe and lame.exe. Here is the code that does the job:

      ProcessStartInfo piFlac = new ProcessStartInfo( "flac.exe" );
      piFlac.CreateNoWindow = true;
      piFlac.UseShellExecute = false;
      piFlac.RedirectStandardOutput = true;
      piFlac.Arguments = string.Format( flacParam, SourceFile );

      ProcessStartInfo piLame = new ProcessStartInfo( "lame.exe" );
      piLame.CreateNoWindow = true;
      piLame.UseShellExecute = false;
      piLame.RedirectStandardInput = true;
      piLame.RedirectStandardOutput = true;
      piLame.Arguments = string.Format( lameParam, QualitySetting, ExtractTag( SourceFile ) );

      Process flacp = null, lamep = null;
      byte[] buffer = BufferPool.RequestBuffer();

      flacp = Process.Start( piFlac );

      lamep = new Process();
      lamep.StartInfo = piLame;
      lamep.OutputDataReceived += new DataReceivedEventHandler( this.ReadStdout );
      lamep.Start();
      lamep.BeginOutputReadLine();

      int count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length );
      while ( count != 0 )
      {
          lamep.StandardInput.BaseStream.Write( buffer, 0, count );
          count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length );
      }

    Here I set the command line parameters to tell lame.exe to write its output to stdout, and make use of the Process.OutputDataReceived event to gather the output data, which is mostly binary data, but DataReceivedEventArgs.Data is of type "string" and I have to convert it to byte[] before putting it into the cache. I think this is ugly, and I tried this approach but the result is incorrect. Is there any way that I can read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event? PS: the reason why I don't use lame to write to disk directly is that I'm trying to convert several files in parallel, and direct writing to disk would cause severe fragmentation. Thanks a lot!
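
    A hedged sketch of the raw-stream alternative: instead of subscribing to OutputDataReceived (which is line- and string-oriented), read lame's StandardOutput.BaseStream directly on a worker thread. CacheWrite is a hypothetical stand-in for however the program appends MP3 bytes to its cache; it is not part of the original code.

      // Run on its own thread so it pumps concurrently with the flac -> lame copy loop above.
      private void PumpLameOutput(Process lamep)
      {
          byte[] outBuf = new byte[0x1000];
          Stream stdout = lamep.StandardOutput.BaseStream;   // raw bytes, no string conversion
          int n;
          while ((n = stdout.Read(outBuf, 0, outBuf.Length)) > 0)
          {
              CacheWrite(outBuf, n);                         // hypothetical: store the n MP3 bytes
          }
      }

    With this in place, BeginOutputReadLine()/OutputDataReceived would no longer be used for the MP3 stream, since both approaches consume the same stdout.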


  • Interface for reading variable length files with header and footer.

    - by John S
    I could use some hints or tips for a decent interface for reading files with special characteristics. The files in question have a header (~120 bytes), a body (1 byte - 3gb) and a footer (4 bytes). The header contains information about the body, and the footer is only a simple CRC32 value of the body. I use Java, so my idea was to extend the "InputStream" class and add a constructor such as "public MyInStream( InputStream in)" where I immediately read the header and then direct the overridden read()s at the body. Problem is, I can't give the user of the class the CRC32 value until the whole body has been read. Because the file can be 3gb large, putting it all in memory is a bad idea. Reading it all into a temporary file is going to be a performance hit if there are many small files. I don't know how large the file is, because the InputStream doesn't have to be a file; it could be a socket. Looking at it again, maybe extending InputStream is a bad idea. Thank you for reading the confused thoughts of a tired programmer. :)
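
    A hedged Java sketch of one shape this interface could take: a FilterInputStream that consumes the header eagerly in the constructor, updates a CRC32 as the body streams through read(), and exposes the checksum once the caller has drained the stream. The 120-byte header size and the accessor names are assumptions; a full version would also override the single-byte read() and hold back the trailing 4-byte footer so it is not folded into the checksum.

      import java.io.FilterInputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.util.zip.CRC32;

      public class MyInStream extends FilterInputStream {
          private final byte[] header = new byte[120];  // assumed fixed-size header
          private final CRC32 bodyCrc = new CRC32();

          public MyInStream(InputStream in) throws IOException {
              super(in);
              int off = 0;                              // read the header up front
              while (off < header.length) {
                  int n = in.read(header, off, header.length - off);
                  if (n < 0) throw new IOException("stream ended inside header");
                  off += n;
              }
          }

          @Override
          public int read(byte[] b, int off, int len) throws IOException {
              int n = super.read(b, off, len);
              if (n > 0) bodyCrc.update(b, off, n);     // checksum the body as it flows past
              return n;
          }

          // Only meaningful after the body has been read to its end.
          public long bodyCrc32() {
              return bodyCrc.getValue();
          }
      }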


  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an rss feed reader that uses a bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read), and deleted from the stream table. Every three minutes, a background process will repopulate the Stream table with new entries (i.e. whenever the daemon adds new entries after the checks the rss feeds for updates). Problem: The query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; it'll reduce duplication, make processing faster and give me some flexibility with how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes): insert into stream(entry_id, user_id) select entries.id, subscriptions_users.user_id from entries inner join subscriptions_users on subscriptions_users.subscription_id = entries.subscription_id where subscriptions_users.user_id = 1 and entries.id not in (select entry_id from metadata where metadata.user_id = 1) and entries.id not in (select entry_id from stream where user_id = 1); The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. do not exist in metadata) and which do not already exist in the stream. Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions will keep on adding a different set of 100 entries that do not already exist in the table (with each successful query taking longer and longer). This is close but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?
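
    One commonly suggested rewrite, sketched here against the column names in the question, is to replace the two NOT IN subqueries with LEFT JOIN / IS NULL tests and cap the batch in the SELECT; whether it actually beats the original depends on having indexes on the user_id and entry_id columns involved, so treat it as a starting point rather than a guaranteed fix.

      INSERT INTO stream (entry_id, user_id)
      SELECT e.id, su.user_id
      FROM entries e
      INNER JOIN subscriptions_users su ON su.subscription_id = e.subscription_id
      LEFT JOIN metadata m ON m.entry_id = e.id AND m.user_id = su.user_id
      LEFT JOIN stream   s ON s.entry_id = e.id AND s.user_id = su.user_id
      WHERE su.user_id = 1
        AND m.entry_id IS NULL   -- not already read
        AND s.entry_id IS NULL   -- not already queued
      LIMIT 100;

    Because the join against stream itself excludes rows that are already queued, repeated runs with the LIMIT should top the table up rather than keep appending fresh sets of 100.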


  • Python to C# with openSSL requirement

    - by fonix232
    Hey there again! Today I ran into a problem when I was making a new theme creator for Chrome. As you may know, Chrome uses a "new" file format, called CRX, to manage its plugins and themes. It is a basic zip file, but a bit modified: "Cr24" + derkey + signature + zipFile. And here comes the problem. There are only two CRX creators, written in Ruby or Python. I don't know either language very well (I've had some basic experience in Python, though mostly with PyS60), so I would like to ask you to help me convert this Python app to a C# class. Also, here is the source of crxmake.py:

      #!/usr/bin/python
      # Cribbed from http://github.com/Constellation/crxmake/blob/master/lib/crxmake.rb
      # and http://src.chromium.org/viewvc/chrome/trunk/src/chrome/tools/extensions/chromium_extension.py?revision=14872&content-type=text/plain&pathrev=14872
      # from: http://grack.com/blog/2009/11/09/packing-chrome-extensions-in-python/
      import sys
      from array import *
      from subprocess import *
      import os
      import tempfile

      def main(argv):
          arg0,dir,key,output = argv
          # zip up the directory
          input = dir + ".zip"
          if not os.path.exists(input):
              os.system("cd %(dir)s; zip -r ../%(input)s . -x '.svn/*'" % locals())
          else:
              print "'%s' already exists using it" % input
          # Sign the zip file with the private key in PEM format
          signature = Popen(["openssl", "sha1", "-sign", key, input], stdout=PIPE).stdout.read();
          # Convert the PEM key to DER (and extract the public form) for inclusion in the CRX header
          derkey = Popen(["openssl", "rsa", "-pubout", "-inform", "PEM", "-outform", "DER", "-in", key], stdout=PIPE).stdout.read();
          out=open(output, "wb");
          out.write("Cr24")  # Extension file magic number
          header = array("l");
          header.append(2);  # Version 2
          header.append(len(derkey));
          header.append(len(signature));
          header.tofile(out);
          out.write(derkey)
          out.write(signature)
          out.write(open(input).read())
          os.unlink(input)
          print "Done."

      if __name__ == '__main__':
          main(sys.argv)

    Please could you help me?
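
    As a hedged sketch of the C# side, here is the file-format half of the port, i.e. what the script does after its two openssl calls: it assumes the DER public key, the SHA1 signature and the zipped theme have already been produced as byte arrays by some other means (for example .NET's RSA classes or an external tool), which is the part deliberately left out here. BinaryWriter writes the three lengths little-endian, matching array("l").tofile on a little-endian machine.

      using System.IO;
      using System.Text;

      static void WriteCrx(string outputPath, byte[] derKey, byte[] signature, byte[] zipBytes)
      {
          using (FileStream fs = File.Create(outputPath))
          using (BinaryWriter w = new BinaryWriter(fs))
          {
              w.Write(Encoding.ASCII.GetBytes("Cr24"));  // magic number
              w.Write(2);                                // CRX format version
              w.Write(derKey.Length);                    // public key length
              w.Write(signature.Length);                 // signature length
              w.Write(derKey);
              w.Write(signature);
              w.Write(zipBytes);                         // the zipped extension/theme
          }
      }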


  • BinaryWriterWith7BitEncoding & BinaryReaderWith7BitEncoding

    - by Tim
    Mr. Ayende wrote in his latest blog post about an implementation of a queue. In the post he's using two magical files: BinaryWriterWith7BitEncoding & BinaryReaderWith7BitEncoding BinaryWriterWith7BitEncoding can write both int and long? using the following method signatures: void WriteBitEncodedNullableInt64(long? value) & void Write7BitEncodedInt(int value) and BinaryReaderWith7BitEncoding can read the values written using the following method signatures: long? ReadBitEncodedNullableInt64() and int Read7BitEncodedInt() So far I've only managed to find a way to read the 7BitEncodedInt: protected int Read7BitEncodedInt() { int value = 0; int byteval; int shift = 0; while(((byteval = ReadByte()) & 0x80) != 0) { value |= ((byteval & 0x7F) << shift); shift += 7; } return (value | (byteval << shift)); } I'm not too good with byte shifting - does anybody know how to read and write the 7BitEncoded long? and write the int ?
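
    Here is a hedged sketch of the write side and of one possible nullable scheme. Write7BitEncodedInt below is the standard mirror of the read loop already quoted; the nullable long encoding (a one-byte present/absent flag followed by a 7-bit encoded value) is an assumption made for illustration, not necessarily what Ayende's BinaryWriterWith7BitEncoding really does. WriteByte/ReadByte are assumed to be the class's byte primitives, mirroring the ReadByte() the snippet above already uses.

      // Emit 7 bits per byte, high bit set on every byte except the last.
      protected void Write7BitEncodedInt(int value)
      {
          uint v = (uint)value;          // unsigned so the shift does not drag the sign bit along
          while (v >= 0x80)
          {
              WriteByte((byte)(v | 0x80));
              v >>= 7;
          }
          WriteByte((byte)v);
      }

      // Assumed scheme: 0 = null, 1 = a 7-bit encoded value follows.
      protected void WriteBitEncodedNullableInt64(long? value)
      {
          if (!value.HasValue) { WriteByte(0); return; }
          WriteByte(1);
          ulong v = (ulong)value.Value;
          while (v >= 0x80)
          {
              WriteByte((byte)(v | 0x80));
              v >>= 7;
          }
          WriteByte((byte)v);
      }

      protected long? ReadBitEncodedNullableInt64()
      {
          if (ReadByte() == 0) return null;
          ulong value = 0;
          int shift = 0;
          int b;
          while (((b = ReadByte()) & 0x80) != 0)
          {
              value |= ((ulong)(b & 0x7F)) << shift;
              shift += 7;
          }
          return (long)(value | ((ulong)b << shift));
      }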


  • NSDictionary, NSArray, NSSet and efficiency

    - by ryyst
    Hi, I've got a text file, with about 200,000 lines. Each line represents an object with multiple properties. I only search through one of the properties (the unique ID) of the objects. If the unique ID I'm looking for is the same as the current object's unique ID, I'm gonna read the rest of the object's values. Right now, each time I search for an object, I just read the whole text file line by line, create an object for each line and see if it's the object I'm looking for - which is basically the most inefficient way to do the search. I would like to read all those objects into memory, so I can later search through them more efficiently. The question is, what's the most efficient way to perform such a search? Is a 200,000-entries NSArray a good way to do this (I doubt it)? How about an NSSet? With an NSSet, is it possible to only search for one property of the objects? Thanks for any help! -- Ry
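
    If the lookups are always by the unique ID, an NSDictionary keyed on that ID gives constant-time access, which is usually a better fit than scanning a 200,000-entry NSArray or using an NSSet (a set tests whole-object membership rather than "find by one property"). A hedged Objective-C sketch, assuming each parsed line has already become an object exposing a uniqueID property; MyObject, parsedObjects and wantedID are placeholders.

      // Build the index once, right after parsing the file.
      NSMutableDictionary *objectsByID = [NSMutableDictionary dictionaryWithCapacity:200000];
      for (MyObject *obj in parsedObjects) {
          [objectsByID setObject:obj forKey:obj.uniqueID];
      }

      // Every later search is a single hash lookup instead of a file scan.
      MyObject *match = [objectsByID objectForKey:wantedID];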


  • Elegant way of parsing Data files for Simulation

    - by sc_ray
    I am working on this project where I need to read in a lot of data from .dat files and use the data to perform simulations. The data in my .dat file looks as follows:

      DeviceID  InteractingDeviceID  InteractionStartTime  InteractionEndTime
      1         2                    1101                  1105

    The fields (1, 2, 1101 and 1105) are tab delimited, and the row means Device 1 interacted with Device 2 at 1101 ms and ended the interaction at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze these interactions. The first step is to parse the file. The language of choice is C++. The approach I was thinking of taking was to read the file and, for every line that's read, create a Device object. This Device object will contain the property DeviceId and an array/vector of structs that will contain a list of all the devices the given DeviceId interacted with over the course of the simulation. The struct will contain the Interacting Device Id, Interaction Start Time and Interaction End Time. I have a twofold question here: Is my approach correct? If I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead using C++? A push in the right direction will be much appreciated. Thanks
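
    A hedged C++ sketch of the parsing half, matching the description above (a header line followed by whitespace-separated integer fields); the Device and Interaction names simply mirror the plan in the question rather than any existing code.

      #include <fstream>
      #include <sstream>
      #include <string>
      #include <unordered_map>
      #include <vector>

      struct Interaction {
          int  otherDevice;
          long startMs;
          long endMs;
      };

      struct Device {
          int id;
          std::vector<Interaction> interactions;
      };

      std::unordered_map<int, Device> parseTrace(const std::string &path)
      {
          std::unordered_map<int, Device> devices;
          std::ifstream in(path);
          std::string line;
          std::getline(in, line);                 // skip the header row
          while (std::getline(in, line)) {
              std::istringstream row(line);
              int id, other;
              long start, end;
              if (!(row >> id >> other >> start >> end))
                  continue;                       // skip malformed lines
              Device &d = devices[id];            // created on first sight of this DeviceID
              d.id = id;
              d.interactions.push_back({other, start, end});
          }
          return devices;
      }

    Reading line by line this way keeps only one line plus the accumulated Device objects in memory, so overhead is dominated by the interactions actually kept.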


  • ACL architecture for a Software as a Service in Spring 3.0

    - by geoaxis
    I am making a software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate). I have to come up with a flexible access control list mechanism. I have three different kinds of users:

    - System (who can do anything to the system; includes admin and internal daemons)
    - Operations (who can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    - End Users (they belong to one or more organizations; for each organization, the user can have one or more roles, like being organization admin or organization read-only member; a role like orgadmin can also add users for that organization)

    Now my question is, how should I model the entity of User? If I just take the End User, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the user's role for each organization? So, for example, User UX belongs to organizations og1, og2 and og3, and for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 he is only org-read-only-user. I have the possibility of making each user belong to one organization alone, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better extensible ACL architecture, please suggest it. Since it's a software as a service, one would expect that a lot of different organizations would be part of the same system. I had one concern that it is not a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn 100 reports on the system, og2 should not suffer), but that is something advanced for now and is not directly related to ACL, but to the physical distribution of data and setup of services based on those ACLs. This is a community wiki question, please correct anything you wish to. Thanks
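
    One way to express "a user has one or more roles per organization" is a separate membership entity instead of a direct user-to-organization collection; each (user, organization, role) combination becomes one row. A hedged JPA-style sketch, with invented class and field names and the User/Organization entities assumed to exist already:

      import javax.persistence.*;

      @Entity
      public class OrganizationMembership {
          @Id @GeneratedValue
          private Long id;

          @ManyToOne(optional = false)
          private User user;

          @ManyToOne(optional = false)
          private Organization organization;

          @Enumerated(EnumType.STRING)
          private OrgRole role;   // e.g. ORG_ADMIN or ORG_READ_ONLY
      }

      enum OrgRole { ORG_ADMIN, ORG_READ_ONLY }

    Under this model, user UX in og1 simply has two membership rows (ORG_ADMIN and ORG_READ_ONLY), one row for og2 and one for og3, and nothing bounds a user to a single organization.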


  • Selectively disabling WebControl elements

    - by NeilD
    I have an ASP.Net MasterPage with a PlaceHolder element. The contents of the PlaceHolder can be viewed in two modes: read-write, and read-only. To implement read only, I wanted to disable all inputs inside the PlaceHolder. I decided to do this by recursively looping through the controls collection of the PlaceHolder, finding all the ones which inherit from WebControl, and setting control.Enabled = false;. Here's what I originally wrote: private void DisableControls(Control c) { if (c.GetType().IsSubclassOf(typeof(WebControl))) { WebControl wc = c as WebControl; wc.Enabled = false; } //Also disable all child controls. foreach (Control child in c.Controls) { DisableControls(child); } } This worked fine, and all controls are disabled... But then the requirement changed ;) NOW, we want to disable all controls except ones which have a certain CssClass. So, my first attempt at the new version: private void DisableControls(Control c) { if (c.GetType().IsSubclassOf(typeof(WebControl))) { WebControl wc = c as WebControl; if (!wc.CssClass.ToLower().Contains("someclass")) wc.Enabled = false; } //Also disable all child controls. foreach (Control child in c.Controls) { DisableControls(child); } } Now I've hit a problem. If I have (for example) an <ASP:Panel> which contains an <ASP:DropDownList>, and I want to keep the DropDownList enabled, then this isn't working. I call DisableControls on the Panel, and it gets disabled. It then loops through the children, and calls DisableControls on the DropDownList, and leaves it enabled (as intended). However, because the Panel is disabled, when the page renders, everything inside the <div> tag is disabled! Can you think of a way round this? I've thought about changing c.GetType().IsSubclassOf(typeof(WebControl)) to c.GetType().IsSubclassOf(typeof(SomeParentClassThatAllInputElementsInheritFrom)), but I can't find anything appropriate!
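
    One hedged way around the container problem is to stop disabling a control whenever something underneath it needs to stay enabled, and let the recursion disable the unprotected children individually instead; a sketch along those lines, using the same "someclass" convention as the code above:

      private void DisableControls(Control c)
      {
          WebControl wc = c as WebControl;
          bool isProtected = wc != null && wc.CssClass.ToLower().Contains("someclass");

          // Disable this control only if neither it nor anything inside it is protected;
          // otherwise leave it enabled and rely on the recursion for the children.
          if (wc != null && !isProtected && !HasProtectedDescendant(c))
          {
              wc.Enabled = false;
          }

          foreach (Control child in c.Controls)
          {
              DisableControls(child);
          }
      }

      private bool HasProtectedDescendant(Control c)
      {
          foreach (Control child in c.Controls)
          {
              WebControl wc = child as WebControl;
              if (wc != null && wc.CssClass.ToLower().Contains("someclass"))
                  return true;
              if (HasProtectedDescendant(child))
                  return true;
          }
          return false;
      }

    The Panel that wraps the protected DropDownList is then left enabled, so its rendered <div> no longer disables the contents, while the Panel's unprotected siblings and children still get Enabled = false.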


  • No transparency in Bitmap loading from MemoryStream

    - by Jogi Joseph George
    Please see the C# code. When I write a Bitmap to a file and read it back from the file, I get the transparency correctly:

      using (Bitmap bmp = new Bitmap(2, 2))
      {
          Color col = Color.FromArgb(1, 2, 3, 4);
          bmp.SetPixel(0, 0, col);
          bmp.Save("J.bmp");
      }

      using (Bitmap bmp = new Bitmap("J.bmp"))
      {
          Color col = bmp.GetPixel(0, 0);
          // Here col.A is 1. This is right.
      }

    But if I write the Bitmap to a MemoryStream and read it back from that MemoryStream, the transparency has been removed; all alpha values become 255:

      MemoryStream ms = new MemoryStream();
      using (Bitmap bmp = new Bitmap(2, 2))
      {
          Color col = Color.FromArgb(1, 2, 3, 4);
          bmp.SetPixel(0, 0, col);
          bmp.Save(ms, ImageFormat.Bmp);
      }

      using (Bitmap bmp = new Bitmap(ms))
      {
          Color col = bmp.GetPixel(0, 0);
          // But here col.A is 255. Why? I am expecting 1 here.
      }

    I wish to save the Bitmap to a MemoryStream and read it back with transparency. Could you please help me?
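
    A hedged variation worth trying: BMP is a poor carrier for alpha, so round-tripping through a format that always keeps an alpha channel (PNG) and rewinding the stream before re-reading usually behaves as expected. This is a sketch of that workaround, not an explanation of why the BMP path differs between the file and the MemoryStream.

      using (MemoryStream ms = new MemoryStream())
      {
          using (Bitmap bmp = new Bitmap(2, 2))
          {
              bmp.SetPixel(0, 0, Color.FromArgb(1, 2, 3, 4));
              bmp.Save(ms, ImageFormat.Png);   // PNG keeps the alpha channel
          }

          ms.Position = 0;                     // rewind before reading the image back
          using (Bitmap bmp = new Bitmap(ms))
          {
              Color col = bmp.GetPixel(0, 0);  // col.A is expected to come back as 1 here
          }
      }

    Note that GDI+ wants the stream kept open for the lifetime of a Bitmap constructed from it, which the outer using block takes care of here.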


  • Which version of Grady Booch's OOA/D book should I buy?

    - by jackj
    Grady Booch's "Object-Oriented Analysis and Design with Applications" is available brand new in both the 2nd edition (1993) and the 3rd edition (2007), while many used copies of both editions are available. Here are my concerns: 1) The 2nd edition uses C++: given that I just finished reading my first two C++ books (Accelerated C++ and C++ Primer) I guess practical tips can only help, so the 2nd edition is probably best (I think the 3rd edition has absolutely no code). On the other hand, the C++ books I read insist on the importance of using standard C++, whereas Booch's 2nd edition was published before the 1998 standard. 2) The 2nd edition is shorter (608 pages vs. 720) so, I guess, it will be slightly easier to get through. 3) The 3rd edition uses UML 2.0, whereas the 2nd edition is pre-UML. Some reviews say that the notation in the 2nd edition is close enough to UML, so it doesn't matter, but I don't know if I should be worrying about this or not. 4) The 2nd edition is available in good-shape used copies for considerably less than what the 3rd one goes for. Given all the above factors, do you think I should buy the 2nd or the 3rd edition? Recommendations on other books are also welcome but I would prefer it if whoever answers has read at least one of the versions of Booch's book (preferably both!). I have already bought but not read GoF and Riel's books. I also know that I should practice a lot with real-life code. Thanks.


  • Trap SIGPIPE when trying to write without a reader

    - by Matt
    I am trying to implement a named-pipe communication solution in bash between two processes. The first process runs a script which echoes something into a named pipe:

      send(){
          echo 'something' > $NAMEDPIPE
      }

    And the second script is supposed to read the named pipe via another script which contains:

      while true; do
          if read line < $NAMEDPIPE; then
              someCommands
          fi
      done

    Note that the named pipe has been previously created using the traditional command mkfifo $NAMEDPIPE. My problem is that the reader script is not always running, so if the writer script tries to write to the named pipe it stays blocked until a reader connects to the pipe. I want to avoid this behavior, and a solution would be to trap a SIGPIPE signal. Indeed, according to man 7 signal, SIGPIPE is supposed to be sent when trying to write to a pipe with no reader. So I changed my function to:

      read(){
          trap 'echo "SIGPIPE received"' SIGPIPE
          echo 'something' > $NAMEDPIPE
      }

    But when I run it, the script stays blocked and no "SIGPIPE received" ever appears... Am I mistaken about the signal mechanism, or is there any better solution to my problem? Thank you for your help.
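
    For what it's worth, SIGPIPE is raised when writing to a pipe whose readers have gone away; with a FIFO that has never had a reader, the writer blocks inside the open() of the redirection and never reaches the write, so no SIGPIPE arrives. A hedged alternative is to bound the write attempt instead, for example with coreutils timeout; the one-second limit below is an arbitrary choice for illustration.

      send(){
          # Give the write one second to find a reader, then give up quietly.
          if ! timeout 1 sh -c "echo 'something' > \"$NAMEDPIPE\""; then
              echo "no reader on $NAMEDPIPE, dropping message" >&2
          fi
      }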


  • How to prevent PHP variables from being arrays?

    - by MJB
    I think that (the title) is the problem I am having. I set up a MySQL connection, I read an XML file, and then I insert those values into a table by looping through the elements. The problem is, instead of inserting only 1 record, sometimes I insert 2 or 3 or 4. It seems to depend on the previous values I have read. I think I am reinitializing the variables, but I guess I am missing something -- hopefully something simple. Here is my code. I originally had about 20 columns, but I shortened the included version to make it easier to read. $ctr = 0; $sql = "insert into csd (id,type,nickname,hostname,username,password) ". "values (?,?,?,?,?,?)"; $cur = $db->prepare($sql); for ($ctr = 0; $ctr < $expected_count; $ctr++) { list ( $lbl, $type, $nickname, $hostname, $username, $password) = ""; $bind_vars = array(); $lbl = "csd_{$ctr}"; $type = $ref->itm->csds->$lbl->type; $nickname = $ref->itm->csds->$lbl->nickname; $hostname = $ref->itm->csds->$lbl->hostname; $username = $ref->itm->csds->$lbl->username; $password = $ref->itm->csds->$lbl->password; $bind_vars = array($id,$type,$nickname,$hostname,$username,$password); $res = $db->execute($cur, $bind_vars); # this is a separate function which works, but which only # does SELECTS and cannot be the problem. I include it because I # want to count the total rows. printf ("%d CSDs on that ITEM now.\n", CountCSDs($id_to_sync)); } P.S. I also tagged this SimpleXML because that is how I am reading the file, though that code is not included above. It looks like this: $Ref = simplexml_load_file($file);
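
    One SimpleXML detail that can produce this kind of multiplying effect, offered here as a hedged guess rather than a diagnosis: each $ref->itm->csds->$lbl->... expression returns a SimpleXMLElement (which can represent several matching nodes), not a plain string, so casting before binding keeps every bound value a scalar.

      $type     = (string) $ref->itm->csds->$lbl->type;
      $nickname = (string) $ref->itm->csds->$lbl->nickname;
      $hostname = (string) $ref->itm->csds->$lbl->hostname;
      $username = (string) $ref->itm->csds->$lbl->username;
      $password = (string) $ref->itm->csds->$lbl->password;

      $bind_vars = array($id, $type, $nickname, $hostname, $username, $password);
      $res = $db->execute($cur, $bind_vars);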


  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in php which is designed to extract MySQL records out of a text file. A particular line might begin with a string corresponding to which table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash and the fields (columns) are separated by commas. For the sake of simplicity, let's assume that we have a table representing people in our database, with fields being First Name, Last Name, and Occupation. Thus, one line of the file might be as follows [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..." Where the ellipses (...) could be additional people. One straightforward approach might be to use fgets() to extract a line from the file, and use preg_match() to extract the table name, records, and fields from that line. However, let's suppose that we have an awful lot of Star Wars characters to track. So many, in fact, that this line ends up being 200,000+ characters/bytes long. In such a case, taking the above approach to extract the database information seems a bit inefficient. You have to first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches. Is there a way, similar to the Java String next(String pattern) method of the Scanner class constructed using a file, that allows you to match patterns in-line while scanning through the file? The idea is that you don't have to scan through the same text twice (to read it from the file into a string, and then to match patterns) or store the text redundantly in memory (in both the file line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.
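
    PHP has no direct equivalent of Scanner.next(pattern) on a stream, but a hedged sketch of the usual workaround is to read fixed-size chunks with fread and keep only a small carry-over buffer between chunks, splitting on the record delimiter as you go. Handling of the leading [TableName] = " prefix and the insert itself are left as stubs, and $path is a placeholder.

      $fp = fopen($path, 'rb');
      $carry = '';
      while (!feof($fp)) {
          $chunk = $carry . fread($fp, 65536);   // 64 KB at a time, never the whole line
          $records = explode('\\', $chunk);      // records are backslash-delimited
          $carry = array_pop($records);          // the last piece may be incomplete; keep it for next round
          foreach ($records as $record) {
              if ($record === '') { continue; }
              $fields = explode(',', $record);   // e.g. array('Han', 'Solo', 'Smuggler')
              // ... insert $fields into the appropriate table here ...
          }
      }
      if ($carry !== '') {
          // handle whatever record (or trailing quote) is left in the buffer
      }
      fclose($fp);

    Memory use stays at roughly one chunk plus one record, regardless of how long the line is.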


  • Need an explanation on this code - c#

    - by ltech
    I am getting familiar with C# day by day and I came across this piece of code public static void CopyStreamToStream( Stream source, Stream destination, Action<Stream,Stream,Exception> completed) { byte[] buffer = new byte[0x1000]; AsyncOperation asyncOp = AsyncOperationManager.CreateOperation(null); Action<Exception> done = e => { if (completed != null) asyncOp.Post(delegate { completed(source, destination, e); }, null); }; AsyncCallback rc = null; rc = readResult => { try { int read = source.EndRead(readResult); if (read > 0) { destination.BeginWrite(buffer, 0, read, writeResult => { try { destination.EndWrite(writeResult); source.BeginRead( buffer, 0, buffer.Length, rc, null); } catch (Exception exc) { done(exc); } }, null); } else done(null); } catch (Exception exc) { done(exc); } }; source.BeginRead(buffer, 0, buffer.Length, rc, null); } From this article Article What I fail to follow is that how does the delegate get notified that the copy is done? Say after the copy is done I want to perform an operation on the copied file.
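
    To make the completion path concrete, here is a hedged usage sketch: the third argument is the notification. Inside CopyStreamToStream, done(...) runs when EndRead returns 0 (or when an exception is caught), and it posts the completed delegate back through asyncOp, so the "after the copy" work belongs in that callback. The file names here are only illustrative.

      using (FileStream src = File.OpenRead("input.dat"))
      using (FileStream dst = File.Create("copy.dat"))
      {
          CopyStreamToStream(src, dst, (source, destination, error) =>
          {
              if (error != null)
                  Console.WriteLine("Copy failed: " + error.Message);
              else
                  Console.WriteLine("Copy finished; safe to work with the copied file now.");
          });

          Console.ReadLine();   // keep this example alive while the asynchronous copy runs
      }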


  • Check if TIFF file is complete

    - by Davi
    I have a FileSystemWatcher monitoring a directory that receives TIF files from a scanning device. It's necessary to check whether the scanning process is done and then read a multipage TIF file; otherwise, the FileSystemWatcher will raise the event before the file is fully scanned. I have something like:

      private void OnFileCreated(...)
      {
          while (IsFileLocked(path))
              Thread.Sleep(time);
          // OK to read
      }

    This is what is happening:

    - Scanner creates the file
    - FileSystemWatcher detects the file, but it's in use
    - Scanner writes the first page to the file
    - Scanner releases the file
    - My code leaves the while (IsFileLocked(path)) loop
    - My code reads the incomplete file (problem)
    - Scanner adds more pages to the file

    Let's say the scanner is scanning 100 pages; then, when it's "OK to read", the file will be incomplete (99 pages left). So, it's necessary to know whether the file is complete or not. Maybe wait some time to see if the file is modified, but this time span can be up to hours, because the scanner can go idle while scanning the same TIF. Another solution would be to check for some flag in the TIF file that indicates that the file is not complete (I've looked for this but haven't found anything).
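
    TIFF itself has no "document finished" marker, so completeness has to be inferred. One hedged heuristic is to poll the page count and only accept the file after it has stopped changing for a while; GDI+ tends to throw while the last IFD is mid-write, which the try/catch below folds into "not ready yet". The polling interval and quiet-period length are placeholders, and, as noted above, a scanner that idles for hours will defeat any purely time-based rule.

      static int CountTiffPages(string path)
      {
          using (FileStream fs = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
          using (Image img = Image.FromStream(fs, false, false))
          {
              return img.GetFrameCount(FrameDimension.Page);
          }
      }

      static void WaitUntilTiffLooksComplete(string path)
      {
          int lastCount = -1;
          int quietPolls = 0;
          while (quietPolls < 3)                       // three unchanged polls in a row = "probably done"
          {
              Thread.Sleep(TimeSpan.FromSeconds(10));  // placeholder polling interval
              int count;
              try   { count = CountTiffPages(path); }
              catch { count = -1; }                    // locked or mid-write: treat as not ready
              quietPolls = (count > 0 && count == lastCount) ? quietPolls + 1 : 0;
              lastCount = count;
          }
      }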


  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iphone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files to prevent the casual hacker/unscrupulous developer from gaining access to them. When the app is deployed, the database file and image files would be obfuscated. Upon launching the app it would read in and un-obfuscate the database file, write the un-obfuscated version to the users "tmp" directory for use by core data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the sqlite database data isn't discernible when the db is opened in a text editor and so that neither is recognized by other applications (SQLite Manager, Photoshop, etc.) It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this but I'm not sure how to proceed. Been developing software for many years but I'm new to everything CocoaTouch, Mac and iPhone. Also never had to secure/encrypt my data so this is new. Any thoughts, suggestions, or links to solutions are appreciated.
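
    A hedged sketch of the "simple scramble" idea with NSData: XOR every byte against a fixed key before shipping and again after loading (XOR is its own inverse, so one routine does both directions). This is obfuscation only, trivially reversible by anyone who extracts the key, which matches the casual-hacker goal but is not encryption. The key string and bundledStorePath are made up for the example.

      static NSData *XORData(NSData *input, NSData *key)
      {
          NSMutableData *output = [input mutableCopy];
          uint8_t *bytes = output.mutableBytes;
          const uint8_t *keyBytes = key.bytes;
          for (NSUInteger i = 0; i < output.length; i++) {
              bytes[i] ^= keyBytes[i % key.length];     // repeat the key across the data
          }
          return output;
      }

      // At launch: read the obfuscated store, undo the XOR, and write the plain
      // copy into tmp for Core Data to open.
      NSData *key = [@"not-a-real-secret" dataUsingEncoding:NSUTF8StringEncoding];
      NSData *scrambled = [NSData dataWithContentsOfFile:bundledStorePath];
      NSData *plain = XORData(scrambled, key);
      NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"store.sqlite"];
      [plain writeToFile:tmpPath atomically:YES];

    The same XORData call, applied to the image files as they are read, would cover the resource-bundle half of the question.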


  • What makes people think that NNs have more computational power than existing models?

    - by Bubba88
    I've read on Wikipedia that neural-network functions defined on a field of arbitrary real/rational numbers (along with algorithmic schemas, and the speculative `transrecursive' models) have more computational power than the computers we use today. Of course it was a page of the Russian Wikipedia (ru.wikipedia.org) and that may not be properly proven, but that's not the only source of such... rumors. Now, the thing that I really do not understand is: how can a string-rewriting machine (NNs are exactly string-rewriting machines just as Turing machines are; only the programming language is different) be more powerful than a universally capable U-machine? Yes, the descriptive instrument is really different, but the fact is that any function of such a class can be (easily or not) turned into a legal Turing machine. Am I wrong? Do I miss something important? What is the cause of people saying that? I do know that the phenomenon of undecidability is widely accepted today (though not consistently proven according to what I've read), but I do not really see the smallest chance of NNs being able to solve that particular problem. Add-in: "Not consistently proven according to what I've read" - I meant that you might want to take a look at A. Zenkin's (a Russian mathematician) papers from after the mid-90s, where he persuasively postulates the wrongness of G. Cantor's concepts, including transfinite sets, uncountable sets, the diagonalization method (the method used in the proof of undecidability by Turing) and maybe others. Even Goedel's incompleteness theorems were proven in the right way only in the 21st century... That's all just to plug Zenkin's work into the post, because I don't know how widespread that knowledge is in the CS community, so forgive me if that looked stupid. Thank you!


  • How to use data receive event in Socket class?

    - by affan
    I have written a simple client that uses TcpClient in .NET to communicate. In order to wait for data messages from the server I use a Read() thread that makes a blocking Read() call on the socket. When I receive something I have to generate various events. These events occur in the worker thread, and thus you cannot update a UI from them directly. Invoke() can be used, but for the end developer it's difficult, as my SDK would be used by users who may not use a UI at all, or may use Presentation Framework, which has a different way of handling this. Invoke() in our test app, a MicroStation add-in, takes a lot of time at the moment. MicroStation is a single-threaded application, and calling Invoke on its thread is not good, as it is always busy drawing and doing other work, so messages take too long to process. I want my events to be generated on the same thread as the UI so the user does not have to go through the Dispatcher or Invoke. Now I want to know: how can I be notified by the socket when data arrives? Is there a built-in callback for that? I would like a Winsock-style receive event without the use of a separate read thread. I also do not want to use a window timer for polling for data. I found the IOControlCode.AsyncIO flag of the IOControl() function, whose help says "Enable notification for when data is waiting to be received. This value is equal to the Winsock 2 FIOASYNC constant.", but I could not find any example of how to use it to get a notification. If I am right, in MFC/Winsock we had to create a window of size (0,0) which was used just for listening for the data-receive event and other socket events, but I don't know how to do that in a .NET application.
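
    For reference, the usual .NET pattern for "tell me when data arrives" without a dedicated read thread is the asynchronous BeginReceive/EndReceive pair; a hedged sketch is below. Note that the callback still runs on a thread-pool thread, so by itself it does not remove the marshal-to-the-UI-thread step the question is really worried about.

      using System;
      using System.Net.Sockets;

      class Receiver
      {
          private readonly Socket socket;                 // assumed already connected
          private readonly byte[] buffer = new byte[4096];

          public event Action<byte[], int> DataArrived;

          public Receiver(Socket connectedSocket)
          {
              socket = connectedSocket;
          }

          public void Start()
          {
              socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
          }

          private void OnReceive(IAsyncResult ar)
          {
              int read = socket.EndReceive(ar);
              if (read > 0)
              {
                  if (DataArrived != null)
                      DataArrived(buffer, read);          // raised on a thread-pool thread
                  Start();                                // queue the next receive
              }
          }
      }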


  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: This is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code. The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard-code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format; the header row is marked with a #, then the column names are displayed, and every row after that is a simple series of values. Example:

      # Column1, Column2, Column3, Column4
      Value01, Value02, Value03, Value04
      Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical, so I was thinking of doing something along these lines:

    - Read in each line, character by character
      - If the character is not a comma or a space: append the character to a temporary string
      - If the character is a comma: append the temporary string to a list, then empty the string
    - Once a line has been read: create a dictionary using the header row as the key (somehow!), and append that dictionary to a list

    However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
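
    The asker explicitly wants approaches rather than code, but for readers of this digest a short hedged sketch of the standard-library shape of the idea may still be useful: the csv module does the character-level splitting, and a list of dicts gives exactly the header-to-value mapping described above. The file name is a placeholder.

      import csv

      def load_rows(path):
          """Parse the '#'-headed CSV into a list of dicts (column name -> value)."""
          with open(path, newline="") as f:
              reader = csv.reader(f, skipinitialspace=True)
              header = next(reader)
              header[0] = header[0].lstrip("#").strip()   # drop the leading '#'
              return [dict(zip(header, row)) for row in reader if row]

      # "Perform an action on every dictionary in a list", e.g. count who has what:
      rows = load_rows("training.csv")
      counts = {}
      for row in rows:
          key = (row["Column1"], row["Column4"])
          counts[key] = counts.get(key, 0) + 1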


  • My iPhone app ran fine in the simulator but crashed in an on-device test (iPod touch 3.1.2); I got the following output

    - by Mickey Shine
    I was running myapp on an iPod touch and I noticed it missed some libraries. Is that the reason?

      [Session started at 2010-03-19 15:57:04 +0800.]
      GNU gdb 6.3.50-20050815 (Apple version gdb-1128) (Fri Dec 18 10:08:53 UTC 2009)
      Copyright 2004 Free Software Foundation, Inc.
      GDB is free software, covered by the GNU General Public License, and you are
      welcome to change it and/or distribute copies of it under certain conditions.
      Type "show copying" to see the conditions.
      There is absolutely no warranty for GDB. Type "show warranty" for details.
      This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
      tty /dev/ttys007
      Loading program into debugger…
      Program loaded.
      target remote-mobile /tmp/.XcodeGDBRemote-237-78
      Switching to remote-macosx protocol
      mem 0x1000 0x3fffffff cache
      mem 0x40000000 0xffffffff none
      mem 0x00000000 0x0fff none
      run
      Running…
      [Switching to thread 11779]
      [Switching to thread 11779]
      sharedlibrary apply-load-rules all
      (gdb) continue
      warning: Unable to read symbols for "/Library/MobileSubstrate/MobileSubstrate.dylib" (file not found).
      2010-03-19 15:57:18.892 myapp[2338:207] MS:Notice: Installing: com.yourcompany.myapp [myapp] (478.52)
      2010-03-19 15:57:19.145 myapp[2338:207] MS:Notice: Loading: /Library/MobileSubstrate/DynamicLibraries/Backgrounder.dylib
      warning: Unable to read symbols for "/Library/MobileSubstrate/DynamicLibraries/Backgrounder.dylib" (file not found).
      warning: Unable to read symbols for "/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/usr/lib/libsubstrate.dylib" (file not found).
      MS:Warning: message not found [myappAppDelegate applicationWillResignActive:]
      MS:Warning: message not found [myappAppDelegate applicationDidBecomeActive:]
      2010-03-19 15:57:19.550 myapp[2338:207] in FirstViewController
      2010-03-19 15:57:20.344 myapp[2338:207] in load table view
      2010-03-19 15:57:20.478 myapp[2338:207] in loading splash view
      2010-03-19 15:57:22.793 myapp[2338:207] in set interface
      Program received signal: “0”.
      warning: check_safe_call: could not restore current frame


  • Reading a bmp file for encrypting and decrypting a txt file into it

    - by Shantanu Gupta
    I am trying to read a bmp file in C++ (Turbo), but I am not able to print the binary stream. I want to encode a txt file into it and decrypt it. How can I do this? I read that the bmp file header is 54 bytes, but how and where should I append the txt file in the bmp file? I know only Turbo C++, so it would be helpful if you could provide a solution or suggestion related to this topic.

      int main()
      {
          ifstream fr;  // reads
          ofstream fw;  // writes to file
          char c;
          int random;
          clrscr();
          char file[2][100] = { "s.bmp", "s.txt" };
          fr.open(file[0], ios::binary);  // file name, mode of open, here input mode i.e. read only
          if (!fr)
              cout << "File can not be opened.";
          fw.open(file[1], ios::app);     // file will be appended
          if (!fw)
              cout << "File can not be opened";
          while (!fr)
              cout << fr.get();           // error should be here, but not able to find out what the error is
          fr.close();
          fw.close();
          getch();
      }

    This code runs fine when I pass a txt file in binary mode.
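
    For the copy loop itself, a hedged sketch of the usual pattern: the loop should run while the stream is still good (while (fr)), not after it has failed (while (!fr)), and the destination also wants binary mode so no byte translation happens. "s.bmp" comes from the question; "copy.bmp" is made up.

      char c;
      ifstream fr("s.bmp", ios::binary);
      ofstream fw("copy.bmp", ios::binary);   // binary on both ends, not ios::app text mode
      while (fr.get(c))                       // read succeeded, so keep copying
          fw.put(c);
      fr.close();
      fw.close();

    Appending the text would then mean writing its bytes after the last pixel byte of the copy (and remembering the original length somewhere so it can be split off again when decoding).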


  • NES Programming - Nametables?

    - by Jeffrey Kern
    Hello everyone, I'm wondering about how the NES displays its graphical muscle. I've researched stuff online and read through it, but I'm wondering about one last thing: nametables. Basically, from what I've read, each 8x8 block in a NES nametable points to a location in the pattern table, which holds graphic memory. In addition, the nametable also has an attribute table which sets a certain color palette for each 16x16 block. They're linked up together like this (assuming 16 8x8 blocks).

    Nametable, with A B C D = pointers to sprite data:

      ABBB
      CDCC
      DDDD
      DDDD

    Attribute table, with 1 2 3 = pointers to color palette data, with < referencing the value to the left, ^ the one above, and ' the one to the left and above:

      1<2<
      ^'^'
      3<3<
      ^'^'

    So, in the example above, the blocks would be colored as so:

      1A 1B 2B 2B
      1C 1D 2C 2C
      3D 3D 3D 3D
      3D 3D 3D 3D

    Now, if I have this on a fixed screen, it works great, because the NES resolution is 256x240 pixels. Now, how do these tables get adjusted for scrolling? Because Nametable 0 can scroll into Nametable 1, and if you keep scrolling, Nametable 0 will wrap around again. That I get. But what I don't get is how the attribute table wraps around as well when scrolling. From what I've read online, the 16x16 blocks it assigns attributes for will cause color distortions on the edge tiles of the screen (as seen when you scroll left to right and vice versa in SMB3). The concern I have is that I understand how to scroll the nametables, but how do you scroll the attribute table? For instance, if I have a green block on the left side of the screen, moving the screen to the right should in theory cause the tiles to the right to be green as well until they move more into frame, at which point they'll revert to their normal colors.
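
    For readers who want the attribute-table math pinned down, here is a hedged C sketch of the standard NESdev formulas for locating a tile's palette bits; coarseX (0-31) and coarseY (0-29) are the tile coordinates inside whichever nametable the current scroll selects, which is why a 16x16 attribute cell straddling the seam can only carry one palette and causes the edge-column artifacts mentioned above.

      #include <stdint.h>

      /* Address of the attribute byte covering tile (coarseX, coarseY) in nametable 0-3. */
      uint16_t attribute_address(int nametable, int coarseX, int coarseY)
      {
          return (uint16_t)(0x23C0 + nametable * 0x400 + (coarseY / 4) * 8 + (coarseX / 4));
      }

      /* Which 2-bit field inside that byte applies to this 16x16 quadrant. */
      int palette_index(uint8_t attrByte, int coarseX, int coarseY)
      {
          int shift = ((coarseY & 2) << 1) | (coarseX & 2);   /* 0, 2, 4 or 6 */
          return (attrByte >> shift) & 0x03;
      }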

