Search Results

Search found 16735 results on 670 pages for 'christopher shen mu long'.


  • Need to call original function from detoured function

    - by peachykeen
    I'm using Detours to hook into an executable's message function, but I need to run my own code and then call the original code. From what I've seen in the Detours docs, it definitely sounds like that should happen automatically. The original function prints a message to the screen, but as soon as I attach a detour it starts running my code and stops printing. The original function's signature is roughly: void CGuiObject::AppendMsgToBuffer(classA, unsigned long, unsigned long, int, classB); My function is: void CGuiObject_AppendMsgToBuffer( [same params, with names] ); I know the memory position the original function resides in, so using: DWORD OrigPos = 0x0040592C; DetourAttach( (void*)OrigPos, CGuiObject_AppendMsgToBuffer); gets me into the function. This code works almost perfectly: my function is called with the proper parameters. However, execution leaves my function and the original code is never called. I've tried jmp-ing back in, but that crashes the program (I'm assuming the code Detours moved to make room for the hook is responsible for the crash). Edit: I've managed to fix the first issue, where execution never returned to the program. By calling the OrigPos value as a function, I'm able to go to the "trampoline" function and from there on to the original code. However, somewhere along the line the registers are changing, and that causes the program to crash with a segfault as soon as I get back into the original code.
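    For reference, the usual Detours pattern is to let DetourAttach rewrite a function pointer so that it ends up pointing at the trampoline, and to call the original only through that pointer. The sketch below assumes the target is a __thiscall member function at the fixed address above and mirrors it with a __fastcall hook (ECX carries the this pointer, EDX is a dummy) so the registers survive the round trip; all names and parameter types are illustrative, not taken from the real binary.

        #include <windows.h>
        #include <detours.h>

        // Trampoline pointer: DetourAttach rewrites this in place so that it
        // points at the relocated original code. Calling through it runs the
        // real AppendMsgToBuffer with its prologue intact.
        typedef void (__fastcall *AppendMsg_t)(void *thisPtr, void *edxDummy,
                                               void *a, unsigned long b,
                                               unsigned long c, int d, void *e);
        static AppendMsg_t OrigAppendMsg = (AppendMsg_t)0x0040592C;

        static void __fastcall HookAppendMsg(void *thisPtr, void *edxDummy,
                                             void *a, unsigned long b,
                                             unsigned long c, int d, void *e)
        {
            // ... custom logic goes here ...
            OrigAppendMsg(thisPtr, edxDummy, a, b, c, d, e);   // forward to the original
        }

        static void InstallHook()
        {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach(&(PVOID &)OrigAppendMsg, HookAppendMsg);  // updates OrigAppendMsg to the trampoline
            DetourTransactionCommit();
        }

    Calling through OrigAppendMsg rather than the raw address is what keeps the saved registers and the relocated prologue intact.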

    Read the article

  • How do I left join tables in unidirectional many-to-one in Hibernate?

    - by jbarz
    I'm piggy-backing off of http://stackoverflow.com/questions/2368195/how-to-join-tables-in-unidirectional-many-to-one-condition. If you have two classes:

        class A {
            @Id public Long id;
        }

        class B {
            @Id public Long id;

            @ManyToOne
            @JoinColumn(name = "parent_id", referencedColumnName = "id")
            public A parent;
        }

    then B-to-A is a many-to-one relationship. I understand that I could add a collection of Bs to A; however, I do not want that association. So my actual question is: is there an HQL or Criteria way of creating the SQL query

        select * from A left join B on (b.parent_id = a.id)

    This will retrieve all A records with a Cartesian product of each B record that references A, and will include A records that have no B referencing them. If you use

        from A a, B b where b.a = a

    then it is an inner join and you do not receive the A records that do not have a B referencing them. I have not found a good way of doing this without two queries, so anything less than that would be great. Thanks.

    Read the article

  • C# TcpClient, getting back the entire response from a telnet device

    - by Dan Bailiff
    I'm writing a configuration tool for a device that can communicate via telnet. The tool sends a command via TcpClient.GetStream().Write(...), and then checks for the device response via TcpClient.GetStream().ReadByte(). This works fine in unit tests or when I'm stepping through code slowly. If I run the config tool such that it performs multiple commands consecutively, then the behavior of the read is inconsistent. By inconsistent I mean sometimes the data is missing, incomplete or partially garbled. So even though the device performed the operation and responded, my code to check for the response was unreliable. I "fixed" this problem by adding a Thread.Sleep to make sure the read method waits long enough for the device to complete its job and send back the whole response (in this case 1.5 seconds was a safe amount). I feel like this is wrong, blocking everything for a fixed time, and I wonder if there is a better way to get the read method to wait only long enough to get the whole response from the device.

        private string Read()
        {
            if (!tcpClient.Connected)
                throw (new Exception("Read() failed: Telnet connection not available."));
            StringBuilder sb = new StringBuilder();
            do
            {
                ParseTelnet(ref sb);
                System.Threading.Thread.Sleep(1500);
            } while (tcpClient.Available > 0);
            return sb.ToString();
        }

        private void ParseTelnet(ref StringBuilder sb)
        {
            while (tcpClient.Available > 0)
            {
                int input = tcpClient.GetStream().ReadByte();
                switch (input)
                {
                    // parse the input
                    // ... do other things in special cases
                    default:
                        sb.Append((char)input);
                        break;
                }
            }
        }

    Read the article

  • [C#][XNA 3.1] How can I host two different XNA windows inside one Windows Form?

    - by secutos
    I am making a Map Editor for a 2D tile-based game. I would like to host two XNA controls inside the Windows Form - the first to render the map; the second to render the tileset. I used the code here to make the XNA control host inside the Windows Form. This all works very well - as long as there is only one XNA control inside the Windows Form. But I need two - one for the map; the second for the tileset. How can I run two XNA controls inside the Windows Form? While googling, I came across the terms "swap chain" and "multiple viewports", but I can't understand them and would appreciate support. Just as a side note, I know the XNA control example was designed so that even if you ran 100 XNA controls, they would all share the same GraphicsDevice - essentially, all 100 XNA controls would share the same screen. I tried modifying the code to instantiate a new GraphicsDevice for each XNA control, but the rest of the code doesn't work. The code is a bit long to post, so I won't post it unless someone needs it to be able to help me. Thanks in advance.

    Read the article

  • How can I apply a style to an existing TikZ node on specific slides?

    - by Eugene Pimenov
    This is what I'm trying to do:

        \begin{tikzpicture}[node distance = 1cm, auto, font=\footnotesize,
            % STYLES
            every node/.style={node distance=1.3cm},
            comment/.style={rectangle, inner sep=5pt, text width=4cm, node distance=0.25cm, font=},
            module/.style={rectangle, drop shadow, draw, fill=black!10, inner sep=5pt,
                text width=3cm, text badly centered, minimum height=0.8cm,
                font=\bfseries\footnotesize\sffamily, rounded corners},
            selected/.style={fill=red!40}]
          \node [module] (nodeA) {node A};
          \node [module, below of=nodeA] (nodeB) {node B};
          \only<1>{
            \node [comment, text width=6cm, right=0.25 of nodeA] {short description of Node A};
            \node [comment, text width=6cm, right=0.25 of nodeB] {short description of Node B};
          }
          \only<2>{
            \node [selected] (nodeA) {};
            \node [comment, text width=6cm, right=0.25 of nodeA] {long description of node A};
          }
          \only<3>{
            \node [selected] (nodeB) {};
            \node [comment, text width=6cm, right=0.25 of nodeA] {long description of node B};
          }
        \end{tikzpicture}

    The problem is that \node [selected] (nodeB) {}; creates a new node, but I want it to apply the style to the existing node. Is there any way to do so? Of course I could have copies of every node in the selected state and the not-selected state, but I really want a proper solution.

    Read the article

  • What's the standard convention for creating a new NSArray from an existing NSArray?

    - by Prairiedogg
    Let's say I have an NSArray of NSDictionaries that is 10 elements long. I want to create a second NSArray with the values for a single key on each dictionary. The best way I can figure to do this is:

        NSMutableArray *nameArray = [[NSMutableArray alloc] initWithCapacity:[array count]];
        for (NSDictionary *p in array) {
            [nameArray addObject:[p objectForKey:@"name"]];
        }
        self.my_new_array = nameArray;
        [array release];
        [nameArray release];

    But in theory, I should be able to get away with not using a mutable array and using a counter in conjunction with [nameArray addObjectAtIndex:count], because the new list should be exactly as long as the old list. Please note that I am NOT trying to filter for a subset of the original array, but make a new array with exactly the same number of elements, just with values dredged up from some arbitrary attribute of each element in the array. In Python one could solve this problem like this:

        new_list = [p['name'] for p in old_list]

    or if you were a masochist, like this:

        new_list = map(lambda p: p['name'], old_list)

    Having to be slightly more explicit in Objective-C makes me wonder if there is an accepted common way of handling these situations.

    Read the article

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008. It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output. Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout << " scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
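    One conventional approach, sketched below rather than taken from the original VC98 hack: a custom streambuf that forwards every character to two underlying buffers, installed over std::cout with rdbuf(). This keeps all 1600 "std::cout <<" call sites untouched; the class name and log file name here are illustrative.

        #include <fstream>
        #include <iostream>
        #include <streambuf>

        // A streambuf that writes every character to two destination buffers.
        class teebuf : public std::streambuf {
        public:
            teebuf(std::streambuf* a, std::streambuf* b) : a_(a), b_(b) {}
        protected:
            virtual int overflow(int ch) {
                if (ch == EOF) return !EOF;
                const int r1 = a_->sputc((char)ch);
                const int r2 = b_->sputc((char)ch);
                return (r1 == EOF || r2 == EOF) ? EOF : ch;
            }
            virtual int sync() {
                const int r1 = a_->pubsync();
                const int r2 = b_->pubsync();
                return (r1 == 0 && r2 == 0) ? 0 : -1;
            }
        private:
            std::streambuf* a_;
            std::streambuf* b_;
        };

        int main() {
            std::ofstream logfile("console.log");            // illustrative file name
            teebuf tee(std::cout.rdbuf(), logfile.rdbuf());
            std::streambuf* old = std::cout.rdbuf(&tee);     // every std::cout << now hits both
            std::cout << "hello, tee" << std::endl;
            std::cout.rdbuf(old);                            // restore before logfile is destroyed
        }

    Because the swap happens at the streambuf layer rather than inside ostream, it does not depend on any particular library's ostream internals, which is exactly what broke in the move from VC6 to VS2008.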

    Read the article

  • BN_hex2bn magically segfaults in OpenSSL

    - by xunil154
    Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's public RSA key to a BIGNUM. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA key to a bignum. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck. Header segment:

        typedef struct KEYS {
            RSA  *serv;
            char *serv_pub;
            int   pub_size;
            RSA  *clnt;
        } KEYS;

        KEYS keys;

    Initializing function:

        // Generates and validates the server's key
        /* code for generating the server RSA left out, it's working */

        // Set client exponent
        keys.clnt = 0;
        keys.clnt = RSA_new();
        BN_dec2bn(&keys.clnt->e, RSA_E_S);   // RSA_E_S contains the public exponent

    Problem code (in Network::server_handshake):

        // *Received an encrypted message from the network and decrypted it into 'buffer' (1024 bytes long)*
        cout << "Assigning clients RSA" << endl;
        // I have verified that 'buffer' contains the proper key
        if (BN_hex2bn(&keys.clnt->n, buffer) < 0) {
            Error("ERROR reading server RSA");
        }
        cout << "clients RSA has been assigned" << endl;

    The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the following error (valgrind output):

        Invalid read of size 8
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
           by 0x40F23E: Network::server_handshake() (Network.cpp:177)
           by 0x40EF42: Network::startNet() (Network.cpp:126)
           by 0x403C38: main (server.cpp:51)
         Address 0x20 is not stack'd, malloc'd or (recently) free'd

        Process terminating with default action of signal 11 (SIGSEGV)
         Access not within mapped region at address 0x20
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)

    And I don't know why: I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!
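    A sketch of a more defensive version of the problem code, under the assumption (suggested by the fault address 0x20, which matches the offset of the n member inside a 64-bit RSA struct) that keys.clnt is still NULL in the server process; the helper name is made up, and the direct member access presumes a pre-1.1, non-opaque OpenSSL such as the 0.9.8 in the trace.

        #include <openssl/bn.h>
        #include <openssl/rsa.h>

        static int assign_client_modulus(RSA *clnt, const char *hex)
        {
            BIGNUM *n = NULL;

            if (clnt == NULL || hex == NULL)
                return -1;               /* the RSA_new() init path never ran here */

            /* BN_hex2bn returns the number of hex digits parsed and 0 on failure,
             * so a "< 0" test never catches an error. */
            if (BN_hex2bn(&n, hex) == 0)
                return -1;

            clnt->n = n;                 /* hand the parsed modulus to the key */
            return 0;
        }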

    Read the article

  • FILE* issue in PPU-side code

    - by Cristina
    We are working on a homework assignment on CELL programming for college, and their feedback response to our questions is kind of slow, so I thought I could get some faster answers here. I have PPU-side code which tries to open a file passed down through char* argv[]; however, this doesn't work: it cannot make the assignment of the pointer, and I get NULL. Now my first idea was that the file isn't in the correct directory, and I copied it to every possible and logical place; my second idea is that maybe the PPU wants this pointer in its LS area, but I can't deduce whether that's the bug or not. So... my question is: what am I doing wrong? I am working with a Fedora 7 Cell SDK, with Eclipse as an IDE. Maybe my argument setup is wrong, though it gets the name of the file correctly. Code on request:

        images_t *read_bin_data(char *name)
        {
            FILE *file;
            images_t *img;
            uint32_t *buffer;
            uint8_t buf;
            unsigned long fileLen;
            unsigned long i;

            // Open file
            file = (FILE*)malloc(sizeof(FILE));
            file = fopen(name, "rb");
            printf("[Debug]Opening file %s\n", name);
            if (!file) {
                fprintf(stderr, "Unable to open file %s", name);
                return NULL;
            }
            //.......
        }

    Main launch:

        int main(int argc, char* argv[])
        {
            int i, img_width;
            int modif_this[4] __attribute__ ((aligned(16))) = {1,2,3,4};
            images_t *faces, *nonfaces;
            spe_context_ptr_t ctxs[SPU_THREADS];
            pthread_t threads[SPU_THREADS];
            thread_arg_t arg[SPU_THREADS];

            // initialize img_width
            img_width = atoi(argv[1]);
            printf("[Debug]Img size is %i\n", img_width);
            faces = read_bin_data(argv[3]);
            //.......
        }

    Thanks for the help.
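    A small diagnostic sketch (not the assignment's code; the helper name is invented) that may help narrow this down: fopen() allocates its own FILE, so the preceding malloc is unnecessary and leaked, and when fopen() returns NULL, errno and the current working directory usually explain why a relative path cannot be found.

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        FILE *open_or_report(const char *name)
        {
            char cwd[4096];
            FILE *fp = fopen(name, "rb");   /* fopen allocates the FILE itself */

            if (fp == NULL) {
                /* strerror(errno) says *why* the open failed (ENOENT, EACCES, ...) */
                fprintf(stderr, "fopen(\"%s\") failed: %s\n", name, strerror(errno));
                /* relative paths are resolved against this directory */
                if (getcwd(cwd, sizeof cwd) != NULL)
                    fprintf(stderr, "working directory: %s\n", cwd);
            }
            return fp;
        }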

    Read the article

  • split line of text

    - by plys
    Hi all, I was wondering if there is an algorithm to split a line into multiple lines, so that the resulting set of lines fits into a square shape rather than a rectangle. Let me give some examples. Input: Hi this is a really long line. Output: Hi this is a really long line Input: a b c d e f Output: a b c d e f Input: This is really such looooooooooooooooooooong line.This is the end. Output: This is really such looooooooooooooooooooong line This is the end. As you can see in the above examples, the input line fits into a wide rectangle, but the output more or less fits into a square shape. Essentially what needs to be done here is simply count the number of characters in the line, take the square root of that number, and then put roughly that many characters on each line. But in the above examples, the splitting needs to respect word boundaries instead of raw character counts. Is there any standard algorithm for this? Any code examples/pointers would be appreciated!
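    The asker doesn't name a language, so purely as an illustration of the greedy wrap-at-sqrt(length) idea described above, here is a small C++ sketch; the function name is made up.

        #include <cmath>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Wrap `text` at word boundaries so the result is roughly square:
        // target width ~ sqrt(total length), never breaking inside a word.
        std::vector<std::string> squarify(const std::string& text) {
            const std::size_t width = static_cast<std::size_t>(
                std::ceil(std::sqrt(static_cast<double>(text.size()))));
            std::istringstream words(text);
            std::vector<std::string> lines;
            std::string word, line;
            while (words >> word) {
                if (!line.empty() && line.size() + 1 + word.size() > width) {
                    lines.push_back(line);   // current line is full, start a new one
                    line.clear();
                }
                if (!line.empty()) line += ' ';
                line += word;
            }
            if (!line.empty()) lines.push_back(line);
            return lines;
        }

        int main() {
            for (const std::string& l : squarify("Hi this is a really long line."))
                std::cout << l << '\n';
        }

    Words longer than the target width simply get a line of their own, which is why the "looooooooooooooooooooong" example stays wider than it is tall.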

    Read the article

  • Issue in setting alarm time in AlarmManager Class

    - by Anshuman
    I have used the following code to set the alarm time with the AlarmManager class. Suppose my device's current date is 9-July-2012 11:31:00. If I set an alarm for 9-July-2012 11:45:00, it works fine and pops up an alarm at that time. But if I set an alarm for 10-Aug-2012 11:40:00, then as soon as I exit the app the alarm pops up, which is wrong because I set the alarm for a date in August. Why does this happen? Is anything wrong in my code? If anyone knows, please help me solve this. Code for setting the alarm time with the AlarmManager class:

        Intent myIntent = new Intent(context, AlarmService.class);
        PendingIntent pendingIntent = PendingIntent.getService(context, i, myIntent, i);
        AlarmManager alarmManager = (AlarmManager) context.getSystemService(AlarmService.ALARM_SERVICE);
        Calendar calendar = Calendar.getInstance();
        calendar.setTimeInMillis(System.currentTimeMillis());
        calendar.add(Calendar.MILLISECOND,
                (int) dateDifferenceFromSystemTime(NoteManager.getSingletonObject().getAlarmTime(i)));
        alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), pendingIntent);

        public static long dateDifferenceFromSystemTime(Date date) {
            long difference = 0;
            try {
                Calendar c = Calendar.getInstance();
                difference = date.getTime() - c.getTimeInMillis();
                if (difference < 0) {
                    // if the difference is negative, the alarm time is earlier than the current time,
                    // so make it positive and subtract it from 86400000 to get the next time to play the alarm
                    // (86400000 = total number of milliseconds in a 24-hour day)
                    difference = difference * -1;
                    difference = 86400000 - difference;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            return difference;
        }

    Service class which pops up the alarm when the time matches:

        public class AlarmService extends IntentService {
            public void onCreate() {
                super.onCreate();
            }

            public AlarmService() {
                super("MyAlarmService");
            }

            @Override
            public int onStartCommand(Intent intent, int flags, int startId) {
                super.onStartCommand(intent, startId, startId);
                return START_STICKY;
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                startActivity(new Intent(this, AlarmDialogActivity.class)
                        .setFlags(Intent.FLAG_ACTIVITY_NEW_TASK));
            }
        }

    Read the article

  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points in a movie. Essentially I have a series of 100-frame-long movies showing some bright spots on a dark background. I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but preferably not overkill to implement) routine to do that. A few more details:

    - the spots are a few (e.g. 5x5) pixels in size
    - the movements are not big; a spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth
    - the "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses
    - the spots don't move in a particular direction; they can move right and then left and then right again
    - the user will select a region around each spot and then this region will be tracked, so I do not need to automatically find the points

    As the videos are b/w, I thought I should rely on brightness. For instance I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that in the various positions in the next frame. I understand that this is a quite naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be superfast; as long as it is accurate I'm happy. Thank you, nico
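    For what it's worth, here is a bare-bones sketch of the per-spot search described above, written in C++ with made-up image types and no particular library assumed. It scores candidate positions in a small window with a sum of squared differences; since the spots dim over time, a normalised cross-correlation score would tolerate the brightness change better, but the search structure is the same.

        #include <cstdint>
        #include <limits>
        #include <vector>

        // One frame as an 8-bit grayscale image in row-major order.
        struct Image {
            int width, height;
            std::vector<std::uint8_t> pixels;
            int at(int x, int y) const { return pixels[y * width + x]; }
        };

        struct Match { int x, y; long long score; };

        // Find the best match for a small template (the spot's patch from the
        // previous frame) inside a +/-radius window around (cx, cy) in the
        // next frame, by minimising the sum of squared differences.
        Match trackSpot(const Image& frame, const Image& patch, int cx, int cy, int radius) {
            Match best = { cx, cy, std::numeric_limits<long long>::max() };
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    long long ssd = 0;
                    for (int py = 0; py < patch.height; ++py) {
                        for (int px = 0; px < patch.width; ++px) {
                            const int fx = cx + dx + px, fy = cy + dy + py;
                            if (fx < 0 || fy < 0 || fx >= frame.width || fy >= frame.height) {
                                ssd += 255LL * 255LL;   // penalise falling off the frame
                                continue;
                            }
                            const long long d = frame.at(fx, fy) - patch.at(px, py);
                            ssd += d * d;
                        }
                    }
                    if (ssd < best.score) best = Match{ cx + dx, cy + dy, ssd };
                }
            }
            return best;
        }

    With 5x5 patches and a 10-pixel search radius this is only a few thousand multiplies per spot per frame, so the naïve search is usually fast enough before any optimisation.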

    Read the article

  • Python and Unicode: How everything should be Unicode

    - by A A
    Forgive me if this is a long question: I have been programming in Python for around six months. Self-taught, starting with the Python tutorial and then SO and then just using Google for stuff. Here is the sad part: no one told me all strings should be Unicode. No, I am not lying or making this up, but where does the tutorial mention it? And most examples I see also just make use of byte strings instead of Unicode strings. I was just browsing and came across this question on SO, which says how every string in Python should be a Unicode string. This pretty much made me cry! I read that every string in Python 3.0 is Unicode by default, so my questions are for 2.x:

    - Should I do print u'Some text' or just print 'Text'?
    - Everything should be Unicode; does this mean that if I have a tuple t = ('First', 'Second'), it should be t = (u'First', u'Second')?
    - I read that I can do from __future__ import unicode_literals and then every string will be a Unicode string, but should I do this inside a container also?
    - When reading/writing to a file, I should use the codecs module. Right? Or should I just use the standard way of reading/writing and encode or decode where required?
    - If I get a string from, say, raw_input(), should I convert that to Unicode also?

    What is the common approach to handling all of the above issues in 2.x? The from __future__ import unicode_literals statement? Sorry for being such a noob, but this changes what I have been doing for a long time, and so clearly I am confused.

    Read the article

  • MySQL Gurus: How to pull a complex grid of data from MySQL database with one query?

    - by iopener
    Hopefully this is less complex than I think. I have one table of companies, another table of jobs, and a third table that contains a single entry for each employee in each job from each company. NOTE: Some companies won't have employees in some jobs, and some companies will have more than one employee in some jobs. The company table has a companyid and companyname field, the job table has a jobid and jobtitle field, and the employee table has employeeid, companyid, jobid and employeename fields. I want to build a table like this:

              +-----------+-----------+-----------+
              | Company A | Company B | Company C |
        ------+-----------+-----------+-----------+
        Job A | Emp 1     | Emp 2     |           |
        ------+-----------+-----------+-----------+
        Job B | Emp 3     |           | Emp 4     |
              |           |           | Emp 5     |
        ------+-----------+-----------+-----------+
        Job C |           | Emp 6     |           |
              |           | Emp 7     |           |
              |           | Emp 8     |           |
        ------+-----------+-----------+-----------+

    I had previously been looping through a result set of jobs, and for each job, looping through a result set of each company, and for each company, looping through each employee and printing it in a table (gross, but performance was not supposed to be a consideration). The app has grown in popularity, and now we have 100 companies and hundreds of jobs, and the server is crapping out (all the id fields are indexed). Any suggestions on how to write a single query to get this data? I don't need the company names or job titles (obviously), but I do need some way to identify where each row from the result should be printed. I'm imagining a result set that just contains a long list of joined employees, and I could write a loop to use the companyid and employeeid values to tell me when to create a new cell or table row. This works as long as there aren't ZERO employees; I would need a NULL employee name for that, I think? Am I completely on the wrong track? Thanks in advance for any ideas!

    Read the article

  • NetworkStream.Read delay .Net

    - by Gilbes
    I have a class that inherits from TcpClient. In that class I have a method to process responses. In that method I get the NetworkStream with MyBase.GetStream and call Read on it. This works fine, except the first call to Read blocks too long. And by too long I mean that the socket has received plenty of data, but won't read it until some arbitrary limit is reached. I can see that it has received plenty of data using the packet sniffer Wireshark. I have set the receive buffer to small amounts, and very small amounts (like just a few bytes), to no avail. I have done the same with the buffer byte array I pass to the read method, and it still delays. Or to put it another way: I am downloading 600k. The download takes 5 seconds (at a little over 100k/second connection to the server, which makes sense). The initial Read call takes 2-3 seconds and tells me only 256 bytes are available (256 is the receive buffer and the size of the array I read into). Then magically, the other few hundred thousand bytes can be read in 256-byte chunks in only a few process ticks each. Using a packet sniffer, I know that during those initial 2-3 seconds, the socket has received much more than just 256 bytes. My connection wasn't .25k/second for 3 seconds and then 400k for 2 seconds. How do I get the bytes from a socket as they come in?

    Read the article

  • Why would I be seeing execution timeouts when setting $_SESSION values?

    - by Kev
    I'm seeing the following errors in my PHP error logs:

        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\index.php on line 3
        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\search.php on line 4

    The lines in question are:

    index.php:

        01 <?php
        02 session_start();
        03 ob_start();
        04 error_reporting(E_All);
        05 $_SESSION['nav'] = "range"; // <-- Error generated here

    search.php:

        01 <?php
        02
        03 session_start();
        04 $_SESSION['nav'] = "range";
        05 $_SESSION['navselected'] = 21; // <-- Error generated here

    Would it really take as long as 60+ seconds to assign a $_SESSION[] value? The platform is:

    - Windows 2003 SP2 + IIS6
    - FastCGI for Windows 2003 (original RTM build)
    - PHP 5.2.6, non-thread-safe build

    There aren't any issues with session data files being cleared up on the server as sessions expire. The oldest sess_XXXXXXXXXXXXXX file I'm seeing is around 2 hours old. There are no disk timeouts evident in the event logs or other such disk health issues that might suggest difficulty creating session data files. The site is also on a server that isn't under heavy load. The site is busy but not being hammered and is very responsive. It's just that we get these errors, three or four in a row, every three or four hours. I should also add that I'm not the original developer of this code and that it belongs to a customer whose developer has long since departed.

    Read the article

  • Technique to remove common words (and their plural versions) from a string

    - by Jake M
    I am attempting to find tags (keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like: 'the', 'at', 'there', 'their' etc. I have 2 methodologies I can use; which do you think is more efficient in terms of speed, and do you know of a more efficient way I could do this?

    Methodology 1:
    - Determine the number of times each word occurs (using the library Collections)
    - Have a list of common words and remove all 'Common Words' from the Collection object by attempting to delete that key from the Collection object if it exists
    - Therefore the speed will be determined by the length of the variable delims

        from collections import Counter
        delims = ['there','there\'s','theres','they','they\'re']
        # the above will end up being a really long list!
        word_freq = Counter(recipe_str.lower().split())
        for delim in set(delims):
            del word_freq[delim]
        return word_freq.most_common()

    Methodology 2:
    - For common words that can be plural, look at each word in the recipe string, and check if it partially contains the non-plural version of a common word. E.g., for the string "There's a test", check each word to see if it contains "there" and delete it if it does.

        delims = ['this','at','them']      # words that can't be plural
        partial_delims = ['there','they']  # words that could occur in many forms
        word_freq = Counter(recipe_str.lower().split())
        for delim in set(delims):
            del word_freq[delim]
        # really slow
        for delim in set(partial_delims):
            for word in word_freq:
                if word.find(delim) != -1:
                    del word_freq[delim]
        return word_freq.most_common()

    Read the article

  • Targeted Simplify in Mathematica

    - by Timo
    I generate very long and complex analytic expressions of the general form: (...something not so complex...)(...ditto...)(...ditto...)...lots... When I try to use Simplify, Mathematica grinds to a halt, I am assuming because it tries to expand the brackets and/or simplify across different brackets. The brackets, while containing long expressions, are easily simplified by Mathematica on their own. Is there some way I can limit the scope of Simplify to a single bracket at a time? Edit: Some additional info and progress. Using the advice from you guys I have now started using something in the vein of

        In[1]:= trouble = Log[(x + I y) (x - I y) + Sqrt[(a + I b) (a - I b)]];

        In[2]:= Replace[trouble, form_ /; (Head[form] == Times) :> Simplify[form], {3}]
        Out[2]= Log[Sqrt[a^2 + b^2] + (x - I y) (x + I y)]

    Changing Times to an appropriate head like Plus or Power makes it possible to target the simplification quite accurately. The problem / question that remains, though, is the following: Simplify will still descend deeper than the level specified to Replace, e.g.

        In[3]:= Replace[trouble, form_ /; (Head[form] == Plus) :> Simplify[form], {1}]
        Out[3]= Log[Sqrt[a^2 + b^2] + x^2 + y^2]

    simplifies the square root as well. My plan was to iteratively use Replace from the bottom up, one level at a time, but this clearly will result in a vast amount of repeated work by Simplify and ultimately result in the exact same bogging down of Mathematica I experienced at the outset. Is there a way to restrict Simplify to certain levels? I realize that this sort of restriction may not produce optimal results, but the idea here is getting something that is "good enough".

    Read the article

  • How to interrupt a thread performing a blocking socket connect?

    - by Jason R
    I have some code that spawns a pthread that attempts to maintain a socket connection to a remote host. If the connection is ever lost, it attempts to reconnect using a blocking connect() call on its socket. Since the code runs in a separate thread, I don't really care about the fact that it uses the synchronous socket API. That is, until it comes time for my application to exit. I would like to perform some semblance of an orderly shutdown, so I use thread synchronization primitives to wake up the thread and signal for it to exit, then perform a pthread_join() on the thread to wait for it to complete. This works great, unless the thread is in the middle of a connect() call when I command the shutdown. In that case, I have to wait for the connect to time out, which could be a long time. This makes the application appear to take a long time to shut down. What I would like to do is to interrupt the call to connect() in some way. After the call returns, the thread will notice my exit signal and shut down cleanly. Since connect() is a system call, I thought that I might be able to intentionally interrupt it using a signal (thus making the call return EINTR), but I'm not sure if this is a robust method in a POSIX threads environment. Does anyone have any recommendations on how to do this, either using signals or via some other method? As a note, the connect() call is down in some library code that I cannot modify, so changing to a non-blocking socket is not an option.
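    For the signal route, the essential detail is registering the handler with sigaction and without SA_RESTART, so the kernel lets connect() fail with EINTR instead of silently restarting it. A sketch, assuming SIGUSR1 is otherwise unused in the process:

        #include <pthread.h>
        #include <signal.h>
        #include <string.h>

        static void wakeup_handler(int) { /* no-op: exists only to interrupt syscalls */ }

        // Call once at startup. Leaving SA_RESTART out of sa_flags is the
        // important part: with it, the kernel would transparently restart
        // connect() instead of returning EINTR.
        void install_wakeup_signal() {
            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_handler = wakeup_handler;
            sigemptyset(&sa.sa_mask);
            sa.sa_flags = 0;                 // deliberately not SA_RESTART
            sigaction(SIGUSR1, &sa, nullptr);
        }

        // At shutdown: set the thread's "please exit" flag first, then poke it.
        void interrupt_connect(pthread_t worker) {
            pthread_kill(worker, SIGUSR1);   // a blocking connect() returns -1 / EINTR
        }

    A different route is to close() or shutdown() the socket from the shutdown path, which also makes a pending connect() fail on most platforms, though the exact behaviour there is platform-dependent.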

    Read the article

  • Why NOT use POST method here?

    - by Camran
    I have a classifieds website. On the main page (index) I have several form fields which the user may or may not fill in, in order to specify a detailed search of classifieds. Ex: Category: Cars, Price from: 3000, Price to: 10000, Color: Red, Area: California. The form's action is set to a PHP page: <form action='query_sql.php' method='post'> In query_sql.php I fetch the variables like this: $category = $_POST['category']; etc etc... Then query MySQL: $query = "SELECT........WHERE category='$category' etc etc.... $results = mysql_query($query); Then I simply display the results of the query to the user by creating a table which is filled in dynamically depending on the result set. However, according to an answer by Col. Shrapnel to my previous Q, I shouldn't use POST here: http://stackoverflow.com/questions/3004754/how-to-hide-url-from-users-when-submitting-this-form The reason I use POST is simply to hide the "one-page-word-document" long URL in the browser's address bar. I am very confused: is it okay to use POST or not? It is working fine both when I use GET and POST now... And it is already on a production server... Btw, in the linked question, I wasn't referring to making the URL invisible (or hiding it), I just wanted it to look better (which I have accomplished with mod_rewrite). UPDATE: If I use GET, then how should I make the URL better looking (beautiful)? Check this previous Q out: http://stackoverflow.com/questions/3000524/how-to-make-this-very-long-url-appear-short

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what is the best way to implement synching data. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience? For me, I'm thinking of something like this:

    1) Store a Last Modified Date on the iPhone.
    2) Upon launching, send a message like getNewData.php?lastModifiedDate=...
    3) The server will process it and send back only the data modified since last time.
    4) This data is formatted like so:

        <+><data id="..."></data></+>                                   // add this to SQLite/CoreData
        <-><data id="..."></data></->                                   // remove this
        <%><data id="..."><attribute>newValue</attribute></data></%>    // new modified value

    I don't want to make <+>, <->, <%>... for each attribute as well, because it would be too complicated, so probably when I receive a <%> field, I would just remove the data with the specified id and then add it again (assuming id here is not some automatically auto-incremented field).

    5) Once everything is downloaded and updated, I will update the Last Modified Date field.

    The main problem with this strategy is: if the network goes down when I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app, I will have to go through the same thing again. Not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each data field and update data gradually?

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module and should use data access functions (written by those responsible for other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate details (for example i18n, activation, product availability etc ...). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about Product tables), OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many bad practices and things I normally consider faults in programming. The problem with the second approach seems to be the countless mini queries these access functions cause; performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things? UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way they are doing things, or for some reason modify the schema? It means some other modules would break or malfunction until the change is integrated into them. The usual ripple effect problem.

    Read the article

  • JUnit: checking if a void method gets called

    - by nkr1pt
    I have a very simple file watcher class which checks every 2 seconds whether a file has changed and, if so, calls the onChange method (void). Is there an easy way to check if the onChange method is getting called in a unit test? Code:

        public class PropertyFileWatcher extends TimerTask {
            private long timeStamp;
            private File file;

            public PropertyFileWatcher(File file) {
                this.file = file;
                this.timeStamp = file.lastModified();
            }

            public final void run() {
                long timeStamp = file.lastModified();
                if (this.timeStamp != timeStamp) {
                    this.timeStamp = timeStamp;
                    onChange(file);
                }
            }

            protected void onChange(File file) {
                System.out.println("Property file has changed");
            }
        }

        @Test
        public void testPropertyFileWatcher() throws Exception {
            File file = new File("testfile");
            file.createNewFile();

            PropertyFileWatcher propertyFileWatcher = new PropertyFileWatcher(file);
            Timer timer = new Timer();
            timer.schedule(propertyFileWatcher, 2000);

            FileWriter fw = new FileWriter(file);
            fw.write("blah");
            fw.close();

            Thread.sleep(8000);

            // check if propertyFileWatcher.onChange was called

            file.delete();
        }

    Read the article

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application's installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit & 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems, or raise false bugs. So, we need to do "bare metal" OS installation for much of our testing. I have been using Acronis True Image for the past year, and am not impressed. It often gives random errors which require a reboot, and is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). The size of all of our images is roughly the same, so that is not related. So, anyway, I'm looking to switch to something else: I only need very basic functionality--just creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be a preference. My OS on which the imaging software would run is Windows Vista, but bootable media (into a Linux flavor) would be fine also, as long as it's quick to use and reliable. Recommendations? (Also, moderators, if this should be a CW, I'll be happy to mark it as such; unclear about the rules there.)

    Read the article

  • When does an ARM7 processor increase its PC register?

    - by Summer_More_More_Tea
    Hi everyone: I've been thinking about this question for a while: when does an ARM7 (with its 3-stage pipeline) processor increment its PC register? I originally thought that after an instruction has been executed, the processor first checks whether any exception occurred in the last execution, then increases PC by 2 or 4 depending on the current state. If an exception occurs, the ARM7 changes its running mode, stores PC in the LR of the current mode and begins to process the current exception without modifying the PC register. But it makes no sense when analyzing the return instructions. I cannot work out why PC is assigned LR when returning from an undefined-instruction exception but LR-4 when returning from a prefetch-abort exception; don't both of these exceptions happen at the decode stage? What's more, according to my textbook, PC will always be assigned LR-4 when returning from a prefetch-abort exception no matter what state the processor was in (ARM or Thumb) before the exception occurred. However, I think PC should be assigned LR-2 if the original state was Thumb, since a Thumb instruction is 2 bytes long instead of the 4 bytes an ARM instruction takes, and we just want to roll back one instruction in the current state. Are there any flaws in my reasoning, or is something wrong with the textbook? It seems like a long question. I really hope someone can help me get the right answer. Thanks in advance.

    Read the article
