Search Results

Search found 69357 results on 2775 pages for 'data oriented design'.

Page 122/2775 | < Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >

  • How to improve designer and developer work flow?

    - by mbdev
    I work in a small startup with two front-end developers and one designer. Currently the process starts with the designer sending a PNG file with the whole page design and assets if needed. My task as a front-end developer is to convert it to an HTML/CSS page. My workflow currently looks like this:
      1. Lay out the distinct parts using HTML elements.
      2. Style each element very roughly (floats, minimal fonts and padding) so I can modify it using inspection.
      3. Using Chrome Developer Tools (inspect), add/change CSS attributes while updating the CSS file.
      4. Refresh the page after X amount of changes.
      5. Use Pixel Perfect to refine the design more.
      6. Sit with the designer to make last adjustments.
    Inferring the paddings, margins and font sizes using trial and error takes a lot of time and I feel the process could become more efficient, but I am not sure how to improve it. Using PSD files is not an option since buying Photoshop for each developer is currently not considered. A design guide is also not available since the design is still evolving and new features are introduced. Ideas for improving the process above, and sharing how the process looks in your company, would be great.

    Read the article

  • Name for Osherove's modified singleton pattern?

    - by Kazark
    I'm pretty well sold on the "singletons are evil" line of thought. Nevertheless, there are limited occasions when you want to limit the creation of an object. Roy Osherove advises, If you're planning to use a singleton in your design, separate the logic of the singleton class and the logic that makes it a singleton (the part that initializes a static variable, for example) into two separate classes. That way, you can keep the single responsibility principle (SRP) and also have a way to override singleton logic. (The Art of Unit Testing 261-262) This pattern still perpetuates global state. However, it does result in a testable design, so it seems to me to be a good pattern for mitigating the damage of a singleton. However, Osherove does not give a name to this pattern; but naming a pattern, according to the Gang of Four, is important: Naming a pattern immediately increases our design vocabulary. It lets us design at a higher level of abstraction. (3) Is there a standard name for this pattern? It seems different enough from a standard singleton to deserve a separate name. Decoupled Singleton, perhaps?
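
    A minimal Java sketch of the separation Osherove describes, assuming a hypothetical ConfigurationService; the class names and the setInstance() test hook are illustrative, not taken from the book:

      // Plain class (its own file): holds the actual logic and is fully testable on its own.
      public class ConfigurationService {
          public String get(String key) {
              return System.getProperty(key); // stand-in for the real lookup logic
          }
      }

      // Separate class (its own file): holds only the "make it a singleton" part,
      // so tests can swap the instance without touching the logic class.
      public final class ConfigurationServiceHolder {
          private static ConfigurationService instance = new ConfigurationService();

          private ConfigurationServiceHolder() {}

          public static ConfigurationService getInstance() {
              return instance;
          }

          // Test hook: override the singleton with a fake or stub.
          public static void setInstance(ConfigurationService replacement) {
              instance = replacement;
          }
      }

    Whatever the pattern ends up being called, the point is that ConfigurationService keeps a single responsibility while ConfigurationServiceHolder carries the lifetime and global-access concern.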

    Read the article

  • Advice on choosing a book to read

    - by Kioshiki
    I would like to ask for some recommendations on useful books to read. Initially I had intended on posting quite a long description of my current issue and asking for advice. But I realised that I didn’t have a clear idea of what I wanted to ask. One thing that is clear to me is that my knowledge in various areas needs improving and reading is one method of doing that. Though choosing the right book to read seems like a task in itself when there are so many books out there. I am a programmer but I also deal with analysis, design & testing. So I am not sure what type of book to read. One option might be to work through two books at the same time. I had thought maybe one about design or practices and another of a more technical focus. Recently I came across one book that I thought might be useful to read: http://xunitpatterns.com/index.html It seems like an interesting book, but the comments I read on amazon.co.uk show that the book is probably longer than it needs to be. Has anyone read it and can comment on this? Another book that I already own and would probably be a good one to finish reading is this: http://www.amazon.co.uk/Code-Complete-Practical-Handbook-Construction/dp/0735619670/ref=sr_1_1?ie=UTF8&qid=1309438553&sr=8-1 Has anyone else read this who can comment on its usefulness? Beyond these two I currently have no clear idea of what to read. I have thought about reading a book related to OO design or the GOF design patterns. But I wonder if I am worrying too much about the process and practices and not focusing on the actual work. I would be very grateful for any suggestions or comments. Many Thanks, Kioshiki

    Read the article

  • Must all AI states be able to react to any event?

    - by Prog
    FSMs implemented with the State design pattern are a common way to design AI agents. I am familiar with the State design pattern and know how to implement it. How is this used in games to design AI agents? Consider a simplified class Monster, representing an AI agent:

      class Monster {
          State state;
          // other fields omitted

          public void update() { // called every game-loop cycle
              state.execute(this);
          }

          public void setState(State state) {
              this.state = state;
          }

          // irrelevant stuff omitted
      }

    There are several State subclasses implementing execute() differently. So far, classic State pattern. AI agents are subject to environmental effects and to other objects communicating with them. For example, an AI agent might tell another agent to attack (i.e. agent.attack()). Or a fireball might tell an AI agent to fall down. This means the agent must have methods such as attack() and fallDown(), or, commonly, some message-receiving mechanism to understand such messages. With an FSM, the current State of the agent should be the one taking care of such method calls, i.e. the agent delegates to the current state upon every event. Is this correct? If correct, how is this done? Are all states obligated by their superclass to implement methods such as attack(), fallDown() etc., so the agent can always delegate to them on almost every event? Or is it done in some other way?
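
    One common answer to the delegation question above is to declare every event on the State superclass with a safe default (a no-op or a generic reaction), so concrete states only override the events they care about and the agent can always forward blindly. A minimal Java sketch along those lines; the IdleState/AttackState/FallenState names are made up for illustration:

      // Base state: declares every event the agent can receive, with defaults,
      // so Monster.attack(target) can simply call state.onAttackOrder(this, target).
      abstract class State {
          abstract void execute(Monster self);               // called every update()
          void onAttackOrder(Monster self, Monster target) { /* default: ignore */ }
          void onKnockedDown(Monster self) { self.setState(new FallenState()); }
      }

      class IdleState extends State {
          @Override void execute(Monster self) { /* wander around */ }
          @Override void onAttackOrder(Monster self, Monster target) {
              self.setState(new AttackState(target));
          }
      }

      class AttackState extends State {
          private final Monster target;
          AttackState(Monster target) { this.target = target; }
          @Override void execute(Monster self) { /* chase and hit the target */ }
      }

      class FallenState extends State {
          @Override void execute(Monster self) { /* stay down for a while, then recover */ }
          // keeps the default onAttackOrder: a fallen monster ignores attack orders
      }

    The alternative the question hints at is a single generic message-receiving method on the state (e.g. onMessage(Message m)), which keeps the State interface stable as new event types are added.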

    Read the article

  • Java image transfer problem

    - by user579098
    Hi, I have a school assignment: send a jpg image, split it into groups of 100 bytes, corrupt it, use a CRC check to locate the errors, and re-transmit until it is eventually built back into its original form. It's practically ready; however, when I check the new images, they appear with errors. I would really appreciate it if someone could look at my code below and maybe locate the logical mistake, as I can't understand what the problem is because everything looks OK. The file with all the data needed, including photos and error patterns, can be downloaded from this link: http://rapidshare.com/#!download|932tl2|443122762|Data.zip|739 Thanks in advance, Stefan. P.S. Don't forget to change the paths in the code for the image and error files.

      package networks;

      import java.io.*;               // for file reader
      import java.util.zip.CRC32;     // CRC32 IEEE (Ethernet)

      public class Main {

          /**
           * Reads a whole file into an array of bytes.
           * @param file The file in question.
           * @return Array of bytes containing file data.
           * @throws IOException Message contains why it failed.
           */
          public static byte[] readFileArray(File file) throws IOException {
              InputStream is = new FileInputStream(file);
              byte[] data = new byte[(int) file.length()];
              is.read(data);
              is.close();
              return data;
          }

          /**
           * Writes (or overwrites if exists) a file with data from an array of bytes.
           * @param file The file in question.
           * @param data Array of bytes containing the new file data.
           * @throws IOException Message contains why it failed.
           */
          public static void writeFileArray(File file, byte[] data) throws IOException {
              OutputStream os = new FileOutputStream(file, false);
              os.write(data);
              os.close();
          }

          /**
           * Converts a long value to an array of bytes.
           * @param data The target variable.
           * @return Byte array conversion of data.
           * @see http://www.daniweb.com/code/snippet216874.html
           */
          public static byte[] toByta(long data) {
              return new byte[] {
                  (byte)((data >> 56) & 0xff),
                  (byte)((data >> 48) & 0xff),
                  (byte)((data >> 40) & 0xff),
                  (byte)((data >> 32) & 0xff),
                  (byte)((data >> 24) & 0xff),
                  (byte)((data >> 16) & 0xff),
                  (byte)((data >> 8) & 0xff),
                  (byte)((data >> 0) & 0xff),
              };
          }

          /**
           * Converts an array of bytes to a long value.
           * @param data The target variable.
           * @return Long value conversion of data.
           * @see http://www.daniweb.com/code/snippet216874.html
           */
          public static long toLong(byte[] data) {
              if (data == null || data.length != 8) return 0x0;
              return (long)(
                  // (Below) convert to longs before shift because digits
                  // are lost with ints beyond the 32-bit limit
                  (long)(0xff & data[0]) << 56 |
                  (long)(0xff & data[1]) << 48 |
                  (long)(0xff & data[2]) << 40 |
                  (long)(0xff & data[3]) << 32 |
                  (long)(0xff & data[4]) << 24 |
                  (long)(0xff & data[5]) << 16 |
                  (long)(0xff & data[6]) << 8 |
                  (long)(0xff & data[7]) << 0
              );
          }

          public static byte[] nextNoise() {
              byte[] result = new byte[100];
              // copy a frame's worth of data (or remaining data if it is less than frame length)
              int read = Math.min(err_data.length - err_pstn, 100);
              System.arraycopy(err_data, err_pstn, result, 0, read);
              // if read data is less than frame length, reset position and add remaining data
              if (read < 100) {
                  err_pstn = 100 - read;
                  System.arraycopy(err_data, 0, result, read, err_pstn);
              } else {
                  // otherwise, increase position
                  err_pstn += 100;
              }
              // return noise segment
              return result;
          }

          /**
           * Given some original data, it is purposefully corrupted according to a
           * second data array (which is read from a file). In pseudocode:
           * corrupt = original xor corruptor
           * @param data The original data.
           * @return The new (corrupted) data.
           */
          public static byte[] corruptData(byte[] data) {
              // get the next noise sequence
              byte[] noise = nextNoise();
              // finally, xor data with noise and return result
              for (int i = 0; i < 100; i++) data[i] ^= noise[i];
              return data;
          }

          /**
           * Given an array of data, a packet is created. In pseudocode:
           * frame = corrupt(data) + crc(data)
           * @param data The original frame data.
           * @return The resulting frame data.
           */
          public static byte[] buildFrame(byte[] data) {
              // pack = [data]+crc32([data])
              byte[] hash = new byte[8];
              // calculate crc32 of data and copy it to byte array
              CRC32 crc = new CRC32();
              crc.update(data);
              hash = toByta(crc.getValue());
              // create a byte array holding the final packet
              byte[] pack = new byte[data.length + hash.length];
              // create the corrupted data
              byte[] crpt = new byte[data.length];
              crpt = corruptData(data);
              // copy corrupted data into pack
              System.arraycopy(crpt, 0, pack, 0, crpt.length);
              // copy hash into pack
              System.arraycopy(hash, 0, pack, data.length, hash.length);
              // return pack
              return pack;
          }

          /**
           * Verifies frame contents.
           * @param frame The frame data (data+crc32).
           * @return True if frame is valid, false otherwise.
           */
          public static boolean verifyFrame(byte[] frame) {
              // allocate hash and data variables
              byte[] hash = new byte[8];
              byte[] data = new byte[frame.length - hash.length];
              // read frame into hash and data variables
              System.arraycopy(frame, frame.length - hash.length, hash, 0, hash.length);
              System.arraycopy(frame, 0, data, 0, frame.length - hash.length);
              // get crc32 of data
              CRC32 crc = new CRC32();
              crc.update(data);
              // compare crc32 of data with crc32 of frame
              return crc.getValue() == toLong(hash);
          }

          /**
           * Transfers a file through a channel in frames and reconstructs it into a new file.
           * @param jpg_file File name of target file to transfer.
           * @param err_file The channel noise file used to simulate corruption.
           * @param out_file The name of the newly-created file.
           * @throws IOException
           */
          public static void transferFile(String jpg_file, String err_file, String out_file) throws IOException {
              // read file data into global variables
              jpg_data = readFileArray(new File(jpg_file));
              err_data = readFileArray(new File(err_file));
              err_pstn = 0;
              // variable that will hold the final (transferred) data
              byte[] out_data = new byte[jpg_data.length];
              // holds the current frame data
              byte[] frame_orig = new byte[100];
              byte[] frame_sent = new byte[100];
              // send file in chunks (frames) of 100 bytes
              for (int i = 0; i < Math.ceil(jpg_data.length / 100); i++) {
                  // copy jpg data into frame and init first-time switch
                  System.arraycopy(jpg_data, i * 100, frame_orig, 0, 100);
                  boolean not_first = false;
                  System.out.print("Packet #" + i + ": ");
                  // repeat getting same frame until frame crc matches with frame content
                  do {
                      if (not_first) System.out.print("F");
                      frame_sent = buildFrame(frame_orig);
                      not_first = true;
                  } while (!verifyFrame(frame_sent));
                  // usually, you'd constrain this by time to prevent infinite loops (in
                  // case the channel is so wacked up it doesn't get a single packet right)
                  // copy frame to image file
                  System.out.println("S");
                  System.arraycopy(frame_sent, 0, out_data, i * 100, 100);
              }
              System.out.println("\nDone.");
              writeFileArray(new File(out_file), out_data);
          }

          // global variables for file data and pointer
          public static byte[] jpg_data;
          public static byte[] err_data;
          public static int err_pstn = 0;

          public static void main(String[] args) throws IOException {
              // list of jpg files
              String[] jpg_file = {
                  "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo1.jpg",
                  "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo2.jpg",
                  "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo3.jpg",
                  "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo4.jpg"
              };
              // list of error patterns
              String[] err_file = {
                  "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 1.DAT",
                  "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 2.DAT",
                  "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 3.DAT",
                  "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 4.DAT"
              };
              // loop through all jpg/channel combinations and run tests
              for (int x = 0; x < jpg_file.length; x++) {
                  for (int y = 0; y < err_file.length; y++) {
                      System.out.println("Transfering photo" + (x + 1) + ".jpg using Pattern " + (y + 1) + "...");
                      transferFile(jpg_file[x], err_file[y], jpg_file[x].replace("photo", "CH#" + y + "_photo"));
                  }
              }
          }
      }

    Read the article

  • Business Objects - Containers or functional?

    - by Walter
    Where I work, we've gone back and forth on this subject a number of times and are looking for a sanity check. Here's the question: should Business Objects be data containers (more like DTOs), or should they also contain logic that can perform some functionality on that object? Example: take a customer object. It probably contains some common properties (Name, Id, etc.); should that customer object also include functions (Save, Calc, etc.)? One line of reasoning says separate the object from the functionality (single responsibility principle) and put the functionality in a Business Logic layer or object. The other line of reasoning says, no, if I have a customer object I just want to call Customer.Save and be done with it. Why do I need to know about how to save a customer if I'm consuming the object? Our last two projects have had the objects separated from the functionality, but the debate has been raised again on a new project. Which makes more sense? EDIT These results are very similar to our debates. One vote to one side or another completely changes the direction. Does anyone else want to add their 2 cents? EDIT Even though the answer sampling is small, it appears that the majority believe that functionality in a business object is acceptable as long as it is simple, but persistence is best placed in a separate class/layer. We'll give this a try. Thanks for everyone's input...
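
    A hedged sketch of the compromise described in the second edit (simple domain behaviour on the object, persistence in a separate class/layer); Customer and CustomerRepository are illustrative names, not code from the projects discussed:

      // Business object: carries its own simple, self-contained behaviour...
      public class Customer {
          private final long id;
          private final String name;
          private final double discountRate;

          public Customer(long id, String name, double discountRate) {
              this.id = id;
              this.name = name;
              this.discountRate = discountRate;
          }

          // Simple calculations belong here (the "Calc" side of the debate).
          public double priceAfterDiscount(double listPrice) {
              return listPrice * (1.0 - discountRate);
          }

          public long getId() { return id; }
          public String getName() { return name; }
      }

      // ...while persistence (the "Save" side) lives behind its own abstraction.
      public interface CustomerRepository {
          void save(Customer customer);
          Customer findById(long id);
      }

    A consumer that only needs the calculation never touches persistence, while code that needs to save a customer depends on the repository rather than on how saving is implemented.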

    Read the article

  • Model login constraints based on time

    - by DaDaDom
    Good morning, for an existing web application I need to implement "time-based login constraints". That means that for each user, and later maybe each group, I can define timeslots when they are (not) allowed to log in to the system. As all data for the application is stored in database tables, I need a way to model this idea there. My first approach, which I will try to explain here:
      1. Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc. on the top level, kept in a "sorted" order (meaning "public holiday" has a higher priority than "workday").
      2. For each top-level node create subnodes with a finer timespan, like "monday", "tuesday", ...
      3. Below that, create an "hour" level: 0, 1, 2, ..., 23. No further details are necessary.
      4. Set every member to "allowed" by default.
      5. For every member of the system create a 1:n relationship member:timeslots which defines constraints, e.g. a member A may have A:monday-forbidden and A:tuesday-forbidden.
      6. Do a depth-first search at every login and check if the member has a constraint.
    Why a depth-first search? Well, I thought that a member might have the rules A:monday->forbidden, A:monday-10->allowed, A:monday-11->allowed, so a login on Monday at 12:30 would fail, but one at 10:30 would succeed. For performance reasons I could break the relational database paradigm and set a flag on every entry in the member-to-timeslots table which is set to true if the member has information set for "finer" timeslots, but that's a second step. Is this model in principle a good idea? Are there existing models? Thanks.
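
    As a sanity check of the "finer timeslot wins" idea, here is a minimal Java sketch of the lookup; the string keys ("monday", "monday-10") and the in-memory map are assumptions standing in for the member-to-timeslots table:

      import java.util.HashMap;
      import java.util.Map;

      // Resolves the most specific rule for a member: hour-level entries
      // override day-level entries, which override the default ("allowed").
      public class LoginConstraintChecker {

          // rules keyed by timeslot path, e.g. "monday" or "monday-10"
          private final Map<String, Boolean> rules = new HashMap<>();

          public void setRule(String timeslot, boolean allowed) {
              rules.put(timeslot, allowed);
          }

          public boolean isLoginAllowed(String day, int hour) {
              Boolean hourRule = rules.get(day + "-" + hour);
              if (hourRule != null) return hourRule;   // most specific rule wins
              Boolean dayRule = rules.get(day);
              if (dayRule != null) return dayRule;
              return true;                             // default: allowed
          }
      }

    With the rules monday=forbidden, monday-10=allowed and monday-11=allowed, a login at Monday 10:30 succeeds and one at 12:30 fails, matching the behaviour described above.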

    Read the article

  • archiving strategies and limitations of data in a table

    - by Samuel
    Environment: JBoss, MySQL, JPA, Hibernate. Our web application will be catering to a large number of users (~1,000,000) and there are lots of child tables where user-specific data is stored (e.g. personal, health, forum contributions ...). What would be the best practice to archive users and user-specific information?
      1. [a] Would it be wise to move the archived user and user-specific information to their own tables within the same database (e.g. user_archive, user_forum_comments_archive ...), OR [b] would you just mark the database entries with a flag in the original table(s) and query only non-archived entries?
      2. We have a unique constraint on User.loginid. How do you handle this requirement if users are archived via 1-[a]? I.e. if a user with loginid 'samuel' gets moved into the archive table and a new user gets added with the same name in the original table, how would you prevent this? What would be the best strategy to address the unique key constraints?
      3. We have a requirement to selectively archive records and bring them back if necessary. Would you rely on database tools or would you handle this via the persistence APIs exposed by the JPA entity model?
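
    For option [b], a minimal JPA sketch of the flag-plus-filtered-query approach; the User entity shown is a simplification of the real schema and is only meant to illustrate the idea:

      import java.util.List;
      import javax.persistence.Column;
      import javax.persistence.Entity;
      import javax.persistence.EntityManager;
      import javax.persistence.Id;

      @Entity
      public class User {
          @Id
          private Long id;

          @Column(unique = true)
          private String loginid;

          // Soft-delete flag: archived rows stay in place and are filtered out of queries.
          private boolean archived = false;

          public void archive() { this.archived = true; }
          public boolean isArchived() { return archived; }
      }

      class UserQueries {
          // Normal lookups only see non-archived users.
          static List<User> findActiveUsers(EntityManager em) {
              return em.createQuery(
                      "select u from User u where u.archived = false", User.class)
                  .getResultList();
          }
      }

    Note that with the flag approach the unique constraint on User.loginid keeps applying across active and archived rows, which avoids the duplicate-loginid problem raised for option [a] but also means an archived loginid can never be reused.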

    Read the article

  • Binary Search Tree for specific intent

    - by Luís Guilherme
    We all know there are plenty of self-balancing binary search trees (BSTs), the most famous being the Red-Black tree and the AVL tree. It might be useful to take a look at AA-trees and scapegoat trees too. I want to do deletions, insertions and searches, like any other BST. However, it will be common to delete all values in a given range, or to delete whole subtrees. So:
      • I want to insert, search and remove values in O(log n) (balanced tree).
      • I would like to delete a subtree, keeping the whole tree balanced, in O(log n) (worst-case or amortized).
      • It might be useful to delete several values in a row, before balancing the tree.
      • I will most often insert 2 values at once, however this is not a rule (just a tip in case there is a tree data structure that takes this into account).
    Is there a variant of AVL or RB that helps me with this? Scapegoat trees look more like this, but would also need some changes; can anyone who has experience with them share some thoughts? More precisely, which balancing procedure and/or removal procedure would help me keep these actions time-efficient?
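
    Not the O(log n) split/join the question is after, but as a baseline it is worth noting that Java's TreeMap (a red-black tree) already supports range deletion through its live subMap view; deleting k keys this way costs O(k log n):

      import java.util.TreeMap;

      public class RangeDeleteDemo {
          public static void main(String[] args) {
              TreeMap<Integer, String> tree = new TreeMap<>();
              for (int i = 0; i < 20; i++) {
                  tree.put(i, "value-" + i);
              }

              // Delete every key in [5, 12): subMap is a view backed by the tree,
              // so clearing it removes those entries from the tree itself.
              tree.subMap(5, 12).clear();

              System.out.println(tree.keySet()); // [0, 1, 2, 3, 4, 12, 13, ..., 19]
          }
      }

    A true O(log n) range or subtree delete needs explicit split/join operations (as treaps provide), which the standard library trees do not expose, so the AVL/RB/scapegoat variants mentioned above would have to supply them.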

    Read the article

  • Looking for a .NET data access layer

    - by Luke101
    Hello, I am looking for a data access layer for ADO.NET. I am not interested in LINQ, EF, NHibernate or any other ORM. Currently, I am using the data access layer from Umbraco. The DAL is very good but they stopped developing it, so I am looking for a different one. Does anyone know where I can find a list of DALs that I can test?

    Read the article

  • iPhone: Save boolean into Core Data

    - by Nic Hubbard
    I have set up one of my Core Data attributes as a Boolean. Now I need to set it, but Xcode keeps telling me that it may not respond to setUseGPS:

      [ride setUseGPS: useGPS.on];

    What is the method for setting a boolean in Core Data? All my other attributes are set this way, and they work great, so I'm not sure why a boolean can't be set this way.

    Read the article

  • I keep on getting "save operation failure" after any change on my XCode Data Model

    - by Philip Schoch
    I started using Core Data for iPhone development. I started out by creating a very simple entity (called Evaluation) with just one string property (called evaluationTopic). I had the following code for inserting a fresh string:

      - (void)insertNewObject {
          // Create a new instance of the entity managed by the fetched results controller.
          NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
          NSEntityDescription *entity = [[fetchedResultsController fetchRequest] entity];
          NSManagedObject *newManagedObject = [NSEntityDescription insertNewObjectForEntityForName:[entity name] inManagedObjectContext:context];

          // If appropriate, configure the new managed object.
          [newManagedObject setValue:@"My Repeating String" forKey:@"evaluationTopic"];

          // Save the context.
          NSError *error;
          if (![context save:&error]) {
              // Handle the error...
          }
          [self.tableView reloadData];
      }

    This worked perfectly fine, and by pushing the + button a new "My Repeating String" would be added to the table view and be in the persistent store. I then pressed "Design - Add Model Version" in Xcode. I added three entities to the existing model and also added new properties to the existing "Evaluation" entity. Then I created new files from the entities by pressing "File - New File - Managed Object Classes", creating new .h and .m files for my four entities, including the "Evaluation" entity with Evaluation.h and Evaluation.m. Now I changed the model version by setting "Design - Data Model - Set Current Version". After having done all this, I changed my insert method:

      - (void)insertNewObject {
          // Create a new instance of the entity managed by the fetched results controller.
          NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
          NSEntityDescription *entity = [[fetchedResultsController fetchRequest] entity];
          Evaluation *evaluation = (Evaluation *) [NSEntityDescription insertNewObjectForEntityForName:[entity name] inManagedObjectContext:context];

          // If appropriate, configure the new managed object.
          [evaluation setValue:@"My even new string" forKey:@"evaluationSpeechTopic"];

          // Save the context.
          NSError *error;
          if (![context save:&error]) {
              // Handle the error...
          }
          [self.tableView reloadData];
      }

    This does not work though! Every time I want to add a row the simulator crashes and I get the following: "NSInternalInconsistencyException, reason: 'This NSPersistentStoreCoordinator has no persistent stores. It cannot perform a save operation.'" I had this error before I knew about creating a new version after changing anything on the data model, but why is this still coming up? Do I need to do any mapping (even though I just added entities and properties that did not exist before)? In the Apple Dev tutorial it sounds very easy, but I have been struggling with this for a long time; it has never worked after changing the model version.

    Read the article

  • Adding "System.Data.SQLite" as refernce.

    - by subash
    Hi all, when I was building my project, done in ASP.NET and C#, it produced the error "The type or namespace name 'SQLite' does not exist in the namespace 'System.Data' (are you missing an assembly reference?)". So when I tried to add it as a reference, I was not able to find "System.Data.SQLite" in my library. How do I overcome this problem?

    Read the article

  • Stream data with Node.js

    - by Saif Bechan
    I want to know if it is possible to stream data from the server to the client with Node.js. From what I can gather all over the internet, this should be possible, yet I fail to find a correct example or solution. What I want is a single HTTP POST to Node.js with AJAX, then leave the connection open and continuously stream data to the client. The client will receive this stream and update the page continuously.

    Read the article

  • Freely available dictionary data for Chinese, Japanese, CJK characters

    - by devio
    I am developing an online CJK character dictionary application, and already found the following databases: Unicode Unihan Database Jim Breen's JMDict and KanjiDic CEDict HanDeDict As I am looking for more data, web searches often lead me to online dictionaries, but not the data itself, using the same sources over again. If you know of any CJK-relevant downloadable dictionaries, please add them.

    Read the article

  • Selenium Data Driven Testing with C#

    - by Dinesh Kanojia
    Hi all, I am Dinesh Kanojia and I am new to automated testing. I want to perform data-driven testing in Selenium using ASP.NET (C#), AJAX and almost all the features of jQuery. Can anyone give me the steps for performing data-driven testing using C#, or a demo I can work from? Thanks in advance, warm regards, Dinesh.K

    Read the article

  • SAS: rearrange field order in data step

    - by Dan
    In SAS 9, how can I, in a simple data step, rearrange the order of the fields?

      data set2;
          /* Something probably goes here */
          set set1;
      run;

    So if set1 has the following fields and rows:

      Name  Title  Salary
      A     Chief  40000
      B     Chief  45000

    then I can change the field order of set2 to:

      Title  Salary  Name
      Chief  40000   A
      Chief  45000   B

    Thanks, Dan

    Read the article

  • core data missing records iphone

    - by Sridhar
    Hello, I have a strange and serious problem. When I am working with Core Data (not saving or editing or anything), just accessing the data from an entity, sometimes, strangely, a few records or all records are missing (deleted) from the entity when my application restarts. I checked this by opening the SQLite database. Does anyone have the same problem? Thanks, Raghu

    Read the article

  • jQuery Toggle & passing data

    - by Ross
    I have created a toggle button with jQuery, but I need to pass my $id so it can execute the MySQL in toggle_visibility.php. How do I pass my variable from my tag?

      Yes

      $("a.toggleVisibility").toggle(
          function () {
              $(this).html("No");
          },
          function () {
              $.ajax({
                  type: "POST",
                  url: "toggle_visibility.php",
                  data: "id",
                  success: function(msg) {
                      alert("Data Saved: " + msg);
                  }
              });
              $(this).html("Yes");
          }
      );

    Read the article

  • Data Mining - Predictive Analysis

    - by IMHO
    We are looking at acquiring data mining software, primarily to run predictive analysis processes. How does the SQL Server Data Mining solution compare to other solutions like SPSS from IBM? Since SQL Server DM is included in the SQL Server Enterprise license, what would be the justification for spending an extra couple of hundred K to buy separate software just to do DM?

    Read the article
