Search Results

Search found 17336 results on 694 pages for 'richard long'.

Page 180/694

  • Criteria API - How to get records based on collection count?

    - by Cosmo
    Hello guys! I have a Question class in ActiveRecord with the following fields:

        [ActiveRecord("`Question`")]
        public class Question : ObcykaniDb<Question>
        {
            private long id;
            private IList<Question> relatedQuestions;

            [PrimaryKey("`Id`")]
            private long Id
            {
                get { return this.id; }
                set { this.id = value; }
            }

            [HasAndBelongsToMany(typeof(Question), ColumnRef = "ChildId", ColumnKey = "ParentId", Table = "RelatedQuestion")]
            private IList<Question> RelatedQuestions
            {
                get { return this.relatedQuestions; }
                set { this.relatedQuestions = value; }
            }
        }

    How do I write a DetachedCriteria query to get all Questions that have at least 5 related questions (by count) in the RelatedQuestions collection? For now this gives me strange results:

        DetachedCriteria dCriteria = DetachedCriteria.For<Question>()
            .CreateCriteria("RelatedQuestions")
            .SetProjection(Projections.Count("Id"))
            .Add(Restrictions.EqProperty(Projections.Id(), "alias.Id"));

        DetachedCriteria dc = DetachedCriteria.For<Question>("alias")
            .Add(Subqueries.Le(5, dCriteria));

        IList<Question> results = Question.FindAll(dc);

    Any ideas what I'm doing wrong?
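
    One possible shape for such a query, sketched here without having been run against this exact mapping (the alias names and the use of Projections.RowCount/Subqueries.Le are assumptions, not a verified answer): correlate a counting subquery on the parent question's Id and compare the count in the outer criteria.

        DetachedCriteria countRelated = DetachedCriteria.For<Question>("sub")
            .CreateCriteria("RelatedQuestions", "rel")
            .Add(Restrictions.EqProperty("sub.Id", "alias.Id"))   // correlate with the outer question
            .SetProjection(Projections.RowCount());               // count related rows per question

        DetachedCriteria atLeastFive = DetachedCriteria.For<Question>("alias")
            .Add(Subqueries.Le(5, countRelated));                 // 5 <= (select count(...) ...)

        IList<Question> results = Question.FindAll(atLeastFive);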

    Read the article

  • Anonymous union definition/declaration in a macro GNU vs VS2008

    - by Alan_m
    I am attempting to alter an IAR-specific header file for an LPC2138 so it can compile with Visual Studio 2008 (to enable compatible unit testing). My problem involves converting register definitions to be hardware independent (not tied to a memory address). The "IAR-safe" macro is:

        #define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \
            volatile __no_init ATTRIBUTE union \
            { \
                unsigned long NAME; \
                BIT_STRUCT NAME ## _bit; \
            } @ ADDRESS

        // declaration
        // (where __gpio0_bits is a structure that names
        // each of the 32 bits as P0_0, P0_1, etc.)
        __IO_REG32_BIT(IO0PIN, 0xE0028000, __READ_WRITE, __gpio0_bits);

        // usage
        IO0PIN = 0xAA55AA55;
        IO0PIN_bit.P0_5 = 0;

    This is my comparable "hardware independent" code:

        #define __IO_REG32_BIT(NAME, BIT_STRUCT) \
            volatile union \
            { \
                unsigned long NAME; \
                BIT_STRUCT NAME##_bit; \
            } NAME;

        // declaration
        __IO_REG32_BIT(IO0PIN, __gpio0_bits);

        // usage
        IO0PIN.IO0PIN = 0xAA55AA55;
        IO0PIN.IO0PIN_bit.P0_5 = 1;

    This compiles and works, but quite obviously my "hardware independent" usage does not match the "IAR-safe" usage. How do I alter my macro so I can use IO0PIN the same way I do in IAR? I feel this is a simple anonymous union matter, but multiple attempts and variants have proven unsuccessful. Maybe the IAR GNU compiler supports anonymous unions and VS2008 does not. Thank you.

    Read the article

  • What is the difference between GC.GetTotalMemory(false) and GC.GetTotalMemory(true)?

    - by somaraj
    Hi, could someone tell me the difference between GC.GetTotalMemory(false) and GC.GetTotalMemory(true)? I have a small program, and when I compared the results, the first loop gives an output of "loop count 0 Diff = 32" for GC.GetTotalMemory(true) and "loop count 0 Diff = 0" for GC.GetTotalMemory(false) - but shouldn't it be the other way around? Similarly, the rest of the loops print some numbers, which differ between the two cases. What does this number indicate, and why does it change as the loop count increases?

        struct Address
        {
            public string Streat;
        }

        class Details
        {
            public string Name;
            public Address address = new Address();
        }

        class emp : IDisposable
        {
            public Details objb = new Details();
            bool disposed = false;

            #region IDisposable Members
            public void Dispose()
            {
                Disposing(true);
            }

            void Disposing(bool disposing)
            {
                if (!disposed)
                    disposed = disposing;
                objb = null;
                GC.SuppressFinalize(this);
            }
            #endregion
        }

        class Program
        {
            static void Main(string[] args)
            {
                long size1 = GC.GetTotalMemory(false);
                emp empobj = null;
                for (int i = 0; i < 200; i++)
                {
                    // using (empobj = new emp())   //------- (1)
                    {
                        empobj = new emp();         //------- (2)
                        empobj.objb.Name = "ssssssssssssssssss";
                        empobj.objb.address.Streat = "asdfasdfasdfasdf";
                    }
                    long size2 = GC.GetTotalMemory(false);
                    Console.WriteLine("loop count " + i + " Diff = " + (size2 - size1));
                }
            }
        }
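
    For what it's worth, the documented difference is that GC.GetTotalMemory(true) waits for a garbage collection to run before reporting, while GC.GetTotalMemory(false) reports the currently allocated bytes immediately. A small standalone snippet (not part of the program above) illustrating the effect:

        long before = GC.GetTotalMemory(false);

        byte[] temp = new byte[100000];
        temp = null;                                  // the array is now unreachable

        long noCollect = GC.GetTotalMemory(false);    // the dead array is likely still counted
        long afterGC   = GC.GetTotalMemory(true);     // waits for a collection, so it usually is not

        Console.WriteLine(noCollect - before);        // roughly 100000
        Console.WriteLine(afterGC - before);          // roughly 0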

    Read the article

  • How best to implement a "favourites" feature? (like favourite products on a data-driven website)

    - by ClarkeyBoy
    Hi, I have written a dynamic, database-driven, object-oriented website with an administration frontend etc. I would like to add a feature where customers can save items as "favourites", without having to create an account and log in, so they can come back to them later, but I don't know exactly how to go about doing this... I see three options:

    1. Log favourites based on IP address, and then change these to be logged against an account if the customer later creates an account.
    2. Force customers to create an account to be able to use this functionality.
    3. Log favourites based on IP address but give users the option to save their favourites under a name they specify.

    The problem with option 1 is that I don't know much about IP addresses - my Dad thinks they are unique, but I know people have had problems with systems like this. The problem with options 1 and 2 is that accounts have not been opened up to customers yet - only administrators can log in at the moment. It should be easy to alter this (no more than a morning or afternoon's work), but I would also have to implement user groups too. The problem with option 3 is that if user A saves a favourites list called "My Favourites", and then user B tries to save a list under this name and it is refused, user B will then be able to access the list saved by user A because they now know it already exists. A solution to this is to password-protect lists, but if I have to go to all that effort I may as well implement option 2. Of course I could always use option 4: an alternative, if anyone can suggest a better solution than any of the above (one such alternative is sketched below).

    So has anyone ever done something like this before? If so, how did you go about it? What do you recommend (or not recommend)?

    Many thanks in advance,

    Regards, Richard
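
    One commonly used alternative, sketched only (this assumes an ASP.NET site, which the post doesn't actually specify): key favourites to a random token stored in a long-lived cookie rather than to the IP address or a user-chosen list name, and merge them into a real account if the customer registers later.

        string token = null;
        HttpCookie cookie = Request.Cookies["fav_token"];
        if (cookie != null)
        {
            token = cookie.Value;
        }
        else
        {
            token = Guid.NewGuid().ToString("N");
            HttpCookie newCookie = new HttpCookie("fav_token", token);
            newCookie.Expires = DateTime.Now.AddYears(1);
            Response.Cookies.Add(newCookie);
        }

        // Favourites are stored server-side keyed by this token;
        // SaveFavourite is a hypothetical data-access call, not part of the original site.
        SaveFavourite(token, productId);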

    Read the article

  • INSERT INTO SQL Server error: invalid object name

    - by thormayer
    I have a problem with a statement on SQL Server. The error I get is that I have an invalid object name 'TBL_VIDEOS':

        INSERT INTO TBL_VIDEOS
        (
            TBL_VIDEOS.ID,
            TBL_VIDEOS.TITLE,
            TBL_VIDEOS.V_DESCRIPTION,
            TBL_VIDEOS.UPLOAD_DATE,
            TBL_VIDEOS.V_VIEWS,
            TBL_VIDEOS.USERNAME,
            TBL_VIDEOS.RATING,
            TBL_VIDEOS.V_SOURCE,
            TBL_VIDEOS.FLAG
        )
        VALUES
        (
            'Z8MTRH3LmTVm',
            'Why Creativity is the New Economy',
            'Dr Richard Florida, one of the world&#39;s leading experts on economic competitiveness, demographic trends and cultural and technological innovation shows how developing the full human and creative capabilities of each individual, combined with institutional supports such as commercial innovation and new industry, will put us back on the path to economic and social prosperity. Listen to the podcast of the full event including audience Q&amp;A: http://www.thersa.org/events/audio-and-past-events/2012/why-creativity-is-the-new-economy Our events are made possible with the support of our Fellowship. Support us by donating or applying to become a Fellow. Donate: http://www.thersa.org/support-the-rsa Become a Fellow: http://www.thersa.org/fellowship/apply',
            CURRENT_TIMESTAMP,
            0,
            1,
            0,
            'http://www.youtube.com/watch?v=VPX7gowr2vE&feature=g-all-u',
            0
        )

    I wonder what I've done wrong? (BTW, the error refers to line 1... I guess that means the table name, but it is correct!)

    Read the article

  • "Unable to open file", when the program tries to open file in /proc

    - by tristartom
    Hi, I am trying to read the file /proc/'pid'/status using a small C++ program. The code is as follows, and even if I use sudo to run it, it still keeps printing "Unable to open file". Please let me know if you have any ideas on how to fix this. Thanks, Richard

        ...
        int main (int argc, char* argv[])
        {
            string line;
            char* fileLoc;

            if (argc != 2) {
                cout << "a.out file_path" << endl;
                fileLoc = "/proc/net/dev";
            } else {
                sprintf(fileLoc, "/proc/%d/status", atoi(argv[1]));
            }
            cout << fileLoc << endl;

            ifstream myfile (fileLoc);
            if (myfile.is_open()) {
                while (! myfile.eof() ) {
                    getline (myfile, line);
                    cout << line << endl;
                }
                myfile.close();
            }
            else
                cout << "Unable to open file";

            return 0;
        }

    Read the article

  • [C#][XNA 3.1] How can I host two different XNA windows inside one Windows Form?

    - by secutos
    I am making a Map Editor for a 2D tile-based game. I would like to host two XNA controls inside the Windows Form - the first to render the map; the second to render the tileset. I used the code here to make the XNA control host inside the Windows Form.

    This all works very well - as long as there is only one XNA control inside the Windows Form. But I need two - one for the map; the second for the tileset. How can I run two XNA controls inside the Windows Form? While googling, I came across the terms "swap chain" and "multiple viewports", but I can't understand them and would appreciate support.

    Just as a side note, I know the XNA control example was designed so that even if you ran 100 XNA controls, they would all share the same GraphicsDevice - essentially, all 100 XNA controls would share the same screen. I tried modifying the code to instantiate a new GraphicsDevice for each XNA control, but the rest of the code doesn't work. The code is a bit long to post, so I won't post it unless someone needs it to be able to help me. Thanks in advance.
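
    For what it's worth, the WinForms GraphicsDeviceControl sample that the post appears to reference is written to support several controls sharing one GraphicsDevice - each control presents into its own window handle - so a second device is typically not needed; you derive one control per view instead. A rough sketch under that assumption (the GraphicsDeviceControl class and its Initialize/Draw overrides come from that sample, so treat them as assumptions if different host code is in use):

        class MapControl : GraphicsDeviceControl
        {
            protected override void Initialize()
            {
                // load map content here
            }

            protected override void Draw()
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                // draw the visible map tiles here
            }
        }

        class TilesetControl : GraphicsDeviceControl
        {
            protected override void Initialize()
            {
                // load the tileset texture here
            }

            protected override void Draw()
            {
                GraphicsDevice.Clear(Color.Black);
                // draw the tileset palette here
            }
        }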

    Read the article

  • C# TcpClient, getting back the entire response from a telnet device

    - by Dan Bailiff
    I'm writing a configuration tool for a device that can communicate via telnet. The tool sends a command via TcpClient.GetStream().Write(...), and then checks for the device response via TcpClient.GetStream().ReadByte(). This works fine in unit tests or when I'm stepping through code slowly. If I run the config tool such that it performs multiple commands consecutively, then the behavior of the read is inconsistent. By inconsistent I mean sometimes the data is missing, incomplete or partially garbled. So even though the device performed the operation and responded, my code to check for the response was unreliable.

    I "fixed" this problem by adding a Thread.Sleep to make sure the read method waits long enough for the device to complete its job and send back the whole response (in this case 1.5 seconds was a safe amount). I feel like this is wrong, blocking everything for a fixed time, and I wonder if there is a better way to get the read method to wait only long enough to get the whole response from the device.

        private string Read()
        {
            if (!tcpClient.Connected)
                throw (new Exception("Read() failed: Telnet connection not available."));

            StringBuilder sb = new StringBuilder();
            do
            {
                ParseTelnet(ref sb);
                System.Threading.Thread.Sleep(1500);
            } while (tcpClient.Available > 0);
            return sb.ToString();
        }

        private void ParseTelnet(ref StringBuilder sb)
        {
            while (tcpClient.Available > 0)
            {
                int input = tcpClient.GetStream().ReadByte();
                switch (input)
                {
                    // parse the input
                    // ... do other things in special cases
                    default:
                        sb.Append((char)input);
                        break;
                }
            }
        }
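
    One way to avoid the fixed sleep is to read until the device's prompt (or some other known terminator) arrives, with a read timeout as a safety net. This is only a sketch - it assumes the device ends every reply with a recognizable prompt string, which may not hold for this particular device:

        private string ReadUntilPrompt(string prompt, int timeoutMs)
        {
            NetworkStream stream = tcpClient.GetStream();
            stream.ReadTimeout = timeoutMs;            // Read() throws if nothing arrives in time

            StringBuilder sb = new StringBuilder();
            byte[] buffer = new byte[1024];

            while (!sb.ToString().EndsWith(prompt))
            {
                int n = stream.Read(buffer, 0, buffer.Length);   // blocks only until some data arrives
                if (n == 0)
                    break;                                       // remote end closed the connection
                sb.Append(Encoding.ASCII.GetString(buffer, 0, n));
            }
            return sb.ToString();
        }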

    Read the article

  • Need to call original function from detoured function

    - by peachykeen
    I'm using Detours to hook into an executable's message function, but I need to run my own code and then call the original code. From what I've seen in the Detours docs, it definitely sounds like that should happen automatically. The original function prints a message to the screen, but as soon as I attach a detour it starts running my code and stops printing.

    The original function's signature is roughly:

        void CGuiObject::AppendMsgToBuffer(classA, unsigned long, unsigned long, int, classB);

    My function is:

        void CGuiObject_AppendMsgToBuffer( [same params, with names] );

    I know the memory position the original function resides in, so using:

        DWORD OrigPos = 0x0040592C;
        DetourAttach((void*)OrigPos, CGuiObject_AppendMsgToBuffer);

    gets me into the function. This code works almost perfectly: my function is called with the proper parameters. However, execution leaves my function and the original code is not called. I've tried jmping back in, but that crashes the program (I'm assuming the code Detours moved to fit the hook is responsible for the crash).

    Edit: I've managed to fix the first issue, with no returning to program execution. By calling the OrigPos value as a function, I'm able to go to the "trampoline" function and from there on to the original code. However, somewhere along the line the registers are changing, and that is causing the program to crash with a segfault as soon as I get back into the original code.

    Read the article

  • MySQL not running on Ubuntu OS - Error 2002

    - by mgj
    Hi, I am a novice to MySQL. I am trying to run the MySQL server on Ubuntu 10.04. Through Synaptic Package Manager I have installed the MySQL version mysql-client-5.1. I wonder how the database password was set for the mysql-client software that I installed in this way; it would be nice if you could enlighten me on this. When I tried running this database, I encountered the error given below:

        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
        mohnish@mohnish-laptop:/var/lib$

    I referred to a similar question posted by another user, but I didn't find a solution through the proposed answers. For instance, when I tried the solutions posted for the similar question I got the following:

        mohnish@mohnish-laptop:/var/lib$ service start mysqld
        start: unrecognized service
        mohnish@mohnish-laptop:/var/lib$ ps -u mysql
        ERROR: User name does not exist.
        ********* simple selection *********  ********* selection by list *********
        -A all processes  -C by command name  -N negate selection  -G by real group ID (supports names)
        -a all w/ tty except session leaders  -U by real user ID (supports names)
        -d all except session leaders  -g by session OR by effective group name
        -e all processes  -p by process ID  T all processes on this terminal
        -s processes in the sessions given  a all w/ tty, including other users  -t by tty
        g OBSOLETE -- DO NOT USE  -u by effective user ID (supports names)
        r only running processes  U processes for specified users
        x processes w/o controlling ttys  t by tty
        *********** output format **********  *********** long options ***********
        -o,o user-defined  -f full  --Group --User --pid --cols --ppid
        -j,j job control  s signal  --group --user --sid --rows --info
        -O,O preloaded -o  v virtual memory  --cumulative --format --deselect
        -l,l long  u user-oriented  --sort --tty --forest --version
        -F extra full  X registers  --heading --no-heading --context
        ********* misc options *********
        -V,V show version  L list format codes  f ASCII art forest
        -m,m,-L,-T,H threads  S children in sum  -y change -l format
        -M,Z security data  c true command name  -c scheduling class
        -w,w wide output  n numeric WCHAN,UID  -H process hierarchy
        mohnish@mohnish-laptop:/var/lib$ which mysql
        /usr/bin/mysql
        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I even tried referring to http://forums.mysql.com/read.php?11,27769,84713#msg-84713 but couldn't find anything useful. Please let me know how I could tackle this error. Thank you very much.

    Read the article

  • How can I apply a style to an existing TikZ node on specific slides?

    - by Eugene Pimenov
    This is what I'm trying to do:

        \begin{tikzpicture}[node distance = 1cm, auto, font=\footnotesize,
            % STYLES
            every node/.style={node distance=1.3cm},
            comment/.style={rectangle, inner sep=5pt, text width=4cm, node distance=0.25cm, font=},
            module/.style={rectangle, drop shadow, draw, fill=black!10, inner sep=5pt,
                text width=3cm, text badly centered, minimum height=0.8cm,
                font=\bfseries\footnotesize\sffamily, rounded corners},
            selected/.style={fill=red!40}]

            \node [module] (nodeA) {node A};
            \node [module, below of=nodeA] (nodeB) {node B};

            \only<1>{
                \node [comment, text width=6cm, right=0.25 of nodeA] {short description of Node A};
                \node [comment, text width=6cm, right=0.25 of nodeB] {short description of Node B};
            }
            \only<2>{
                \node [selected] (nodeA) {};
                \node [comment, text width=6cm, right=0.25 of nodeA] {long description of node A};
            }
            \only<3>{
                \node [selected] (nodeB) {};
                \node [comment, text width=6cm, right=0.25 of nodeA] {long description of node B};
            }
        \end{tikzpicture}

    The problem is that \node [selected] (nodeB) {}; creates a new node, but I want it to apply the style to the existing node. Is there any way to do so? Of course I could have copies of every node in a selected state and a not-selected state, but I would really like a proper solution.

    Read the article

  • BN_hex2bn magically segfaults in openSSL

    - by xunil154
    Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a BIGNUM. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA key to a BIGNUM. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck.

    Header segment:

        typedef struct KEYS {
            RSA  *serv;
            char *serv_pub;
            int   pub_size;
            RSA  *clnt;
        } KEYS;

        KEYS keys;

    Initializing function:

        // Generates and validates the server's key
        /* code for generating server RSA left out, it's working */

        // Set client exponent
        keys.clnt = 0;
        keys.clnt = RSA_new();
        BN_dec2bn(&keys.clnt->e, RSA_E_S);   // RSA_E_S contains the public exponent

    Problem code (in Network::server_handshake):

        // *Received an encrypted message from the network and decrypted it into 'buffer' (1024 bytes long)*
        cout << "Assigning clients RSA" << endl;
        // I have verified that 'buffer' contains the proper key
        if (BN_hex2bn(&keys.clnt->n, buffer) < 0) {
            Error("ERROR reading server RSA");
        }
        cout << "clients RSA has been assigned" << endl;

    The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the error (valgrind output):

        Invalid read of size 8
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
           by 0x40F23E: Network::server_handshake() (Network.cpp:177)
           by 0x40EF42: Network::startNet() (Network.cpp:126)
           by 0x403C38: main (server.cpp:51)
         Address 0x20 is not stack'd, malloc'd or (recently) free'd

        Process terminating with default action of signal 11 (SIGSEGV)
         Access not within mapped region at address 0x20
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)

    And I don't know why: I'm using the exact same code in the client program, and there it works just fine. Any input is greatly appreciated!

    Read the article

  • How do I left join tables in unidirectional many-to-one in Hibernate?

    - by jbarz
    I'm piggy-backing off of http://stackoverflow.com/questions/2368195/how-to-join-tables-in-unidirectional-many-to-one-condition. If you have two classes:

        class A {
            @Id public Long id;
        }

        class B {
            @Id public Long id;

            @ManyToOne
            @JoinColumn(name = "parent_id", referencedColumnName = "id")
            public A parent;
        }

    B to A is a many-to-one relationship. I understand that I could add a collection of Bs to A, however I do not want that association. So my actual question is: is there an HQL or Criteria way of creating the SQL query

        select * from A left join B on (b.parent_id = a.id)

    This will retrieve all A records with a Cartesian product of each B record that references A, and will include A records that have no B referencing them. If you use

        from A a, B b where b.a = a

    then it is an inner join and you do not receive the A records that do not have a B referencing them. I have not found a good way of doing this without two queries, so anything less than that would be great. Thanks.

    Read the article

  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points in a movie. Essentially I have a series of 100-frame-long movies showing some bright spots on a dark background. I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but not overly complicated to implement) routine to do that. A few more details:

    - The spots are a few (e.g. 5x5) pixels in size.
    - The movements are not big. A spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth.
    - The "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses.
    - The spots don't move in a particular direction. They can move right, then left, then right again.
    - The user will select a region around each spot and then this region will be tracked, so I do not need to automatically find the points.

    As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move around the region and calculate the correlation of the region's area in the previous frame with that in the various positions in the next frame. I understand that this is a quite naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate I'm happy. Thank you, nico
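
    The correlation idea described above is essentially template matching over a small search window, and it does tend to work for slow, smooth motion like this. A minimal sketch of that idea (grayscale frames represented as byte[,] purely for illustration; a normalized correlation score would cope better with the fading brightness than the raw squared-difference used below):

        // Search a (2*radius+1)^2 neighbourhood around the spot's previous position and return
        // the offset whose patch most resembles the template (lowest sum of squared differences).
        static void TrackSpot(byte[,] frame, byte[,] template, int prevX, int prevY, int radius,
                              out int newX, out int newY)
        {
            int h = template.GetLength(0), w = template.GetLength(1);
            long best = long.MaxValue;
            newX = prevX;
            newY = prevY;

            for (int dy = -radius; dy <= radius; dy++)
            {
                for (int dx = -radius; dx <= radius; dx++)
                {
                    long score = 0;
                    for (int ty = 0; ty < h; ty++)
                        for (int tx = 0; tx < w; tx++)
                        {
                            int diff = frame[prevY + dy + ty, prevX + dx + tx] - template[ty, tx];
                            score += diff * diff;
                        }

                    if (score < best) { best = score; newX = prevX + dx; newY = prevY + dy; }
                }
            }
            // Bounds checks against the frame edges are omitted for brevity.
        }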

    Read the article

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008.

    It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead, when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output.

    Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout <<" scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?)

    Any ideas on how to accomplish this?

    Read the article

  • What's the standard convention for creating a new NSArray from an existing NSArray?

    - by Prairiedogg
    Let's say I have an NSArray of NSDictionaries that is 10 elements long. I want to create a second NSArray with the values for a single key on each dictionary. The best way I can figure to do this is:

        NSMutableArray *nameArray = [[NSMutableArray alloc] initWithCapacity:[array count]];
        for (NSDictionary *p in array) {
            [nameArray addObject:[p objectForKey:@"name"]];
        }
        self.my_new_array = array;
        [array release];
        [nameArray release];

    But in theory, I should be able to get away with not using a mutable array and using a counter in conjunction with [nameArray addObjectAtIndex:count], because the new list should be exactly as long as the old list. Please note that I am NOT trying to filter for a subset of the original array, but to make a new array with exactly the same number of elements, just with values dredged up from some arbitrary attribute of each element in the array.

    In Python one could solve this problem like this:

        new_list = [p['name'] for p in old_list]

    or, if you were a masochist, like this:

        new_list = map(lambda p: p['name'], old_list)

    Having to be slightly more explicit in Objective-C makes me wonder if there is an accepted, common way of handling these situations.

    Read the article

  • split line of text

    - by plys
    Hi all, I was wondering if there is an algorithm to split a line into multiple lines, so that the resulting set of lines fits into a square shape rather than a rectangle. Let me give some examples.

    Input:

        Hi this is a really long line.

    Output:

        Hi this is
        a really
        long line

    Input:

        a b c d e f

    Output:

        a b c
        d e f

    Input:

        This is really such looooooooooooooooooooong line.This is the end.

    Output:

        This is really such
        looooooooooooooooooooong line
        This is the end.

    As you can see in the above examples, the input line fits into a wide rectangle, but the output more or less fits into a square shape. Essentially what needs to be done here is simply to count the number of characters in the line and take the square root of that number, then put roughly that many characters on each line. But in the above examples, the splitting needs to be done by respecting word wraps instead of characters. Is there any standard algorithm for this? Any code examples or pointers would be appreciated!
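
    A rough sketch of the square-root heuristic described in the question, wrapping on word boundaries (C# here purely as an illustration; the naïve target-width calculation ignores spaces and could certainly be tuned):

        using System;
        using System.Collections.Generic;
        using System.Text;

        static class SquareWrapper
        {
            public static string SquareWrap(string text)
            {
                int width = (int)Math.Ceiling(Math.Sqrt(text.Length));   // target line width
                string[] words = text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

                List<string> lines = new List<string>();
                StringBuilder current = new StringBuilder();

                foreach (string word in words)
                {
                    // start a new line if adding this word would exceed the target width
                    if (current.Length > 0 && current.Length + 1 + word.Length > width)
                    {
                        lines.Add(current.ToString());
                        current.Length = 0;
                    }
                    if (current.Length > 0) current.Append(' ');
                    current.Append(word);
                }
                if (current.Length > 0) lines.Add(current.ToString());

                return string.Join(Environment.NewLine, lines.ToArray());
            }
        }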

    Read the article

  • Targeted Simplify in Mathematica

    - by Timo
    I generate very long and complex analytic expressions of the general form:

        (...something not so complex...)(...ditto...)(...ditto...)...lots...

    When I try to use Simplify, Mathematica grinds to a halt, I am assuming because it tries to expand the brackets and/or simplify across different brackets. The brackets, while containing long expressions, are easily simplified by Mathematica on their own. Is there some way I can limit the scope of Simplify to a single bracket at a time?

    Edit: Some additional info and progress. Using the advice from you guys, I have now started using something in the vein of

        In[1]:= trouble = Log[(x + I y) (x - I y) + Sqrt[(a + I b) (a - I b)]];

        In[2]:= Replace[trouble, form_ /; (Head[form] == Times) :> Simplify[form], {3}]

        Out[2]= Log[Sqrt[a^2 + b^2] + (x - I y) (x + I y)]

    Changing Times to an appropriate head like Plus or Power makes it possible to target the simplification quite accurately. The problem/question that remains, though, is the following: Simplify will still descend deeper than the level specified to Replace, e.g.

        In[3]:= Replace[trouble, form_ /; (Head[form] == Plus) :> Simplify[form], {1}]

        Out[3]= Log[Sqrt[a^2 + b^2] + x^2 + y^2]

    simplifies the square root as well. My plan was to iteratively use Replace from the bottom up, one level at a time, but this clearly would result in a vast amount of repeated work by Simplify and ultimately the exact same bogging down of Mathematica I experienced at the outset. Is there a way to restrict Simplify to certain levels? I realize that this sort of restriction may not produce optimal results, but the idea here is getting something that is "good enough".

    Read the article

  • Python and Unicode: How everything should be Unicode

    - by A A
    Forgive me if this is a long question: I have been programming in Python for around six months. Self-taught, starting with the Python tutorial, then SO, and then just using Google for stuff. And here is the sad part: no one told me all strings should be Unicode. No, I am not lying or making this up, but where does the tutorial mention it? And most examples I see also just make use of byte strings instead of Unicode strings. I was just browsing and came across this question on SO, which says how every string in Python should be Unicode. This pretty much made me cry!

    I read that every string in Python 3.0 is Unicode by default, so my questions are for 2.x:

    1. Should I do print u'Some text', or just print 'Text'?
    2. Everything should be Unicode - does this mean that if, say, I have a tuple t = ('First', 'Second'), it should be t = (u'First', u'Second')?
    3. I read that I can do from __future__ import unicode_literals and then every string literal will be a Unicode string, but should I do this inside a container also?
    4. When reading/writing to a file, should I use the codecs module? Or should I just use the standard way of reading/writing and encode or decode where required?
    5. If I get a string from, say, raw_input(), should I convert that to Unicode also?

    What is the common approach to handling all of the above issues in 2.x? The from __future__ import unicode_literals statement?

    Sorry for being such a noob, but this changes what I have been doing for a long time, and so clearly I am confused.

    Read the article

  • FILE* issue in PPU-side code

    - by Cristina
    We are working on a homework assignment on Cell programming for college, and their feedback response to our questions is kind of slow, so I thought I could get some faster answers here.

    I have PPU-side code which tries to open a file passed down through char* argv[]; however, this doesn't work - it cannot make the assignment to the pointer, and I get a NULL. My first idea was that the file isn't in the correct directory, so I copied it to every possible and logical place; my second idea is that maybe the PPU wants this pointer in its LS area, but I can't deduce whether that's the bug or not. So... my question is: what am I doing wrong? I am working with the Fedora 7 Cell SDK, with Eclipse as an IDE. Maybe my argument setup is wrong, though it gets the name of the file correctly. Code on request:

        images_t *read_bin_data(char *name)
        {
            FILE *file;
            images_t *img;
            uint32_t *buffer;
            uint8_t buf;
            unsigned long fileLen;
            unsigned long i;

            // Open file
            file = (FILE*)malloc(sizeof(FILE));
            file = fopen(name, "rb");
            printf("[Debug]Opening file %s\n", name);
            if (!file)
            {
                fprintf(stderr, "Unable to open file %s", name);
                return NULL;
            }
            //.......
        }

    Main launch:

        int main(int argc, char* argv[])
        {
            int i, img_width;
            int modif_this[4] __attribute__ ((aligned(16))) = {1,2,3,4};
            images_t *faces, *nonfaces;
            spe_context_ptr_t ctxs[SPU_THREADS];
            pthread_t threads[SPU_THREADS];
            thread_arg_t arg[SPU_THREADS];

            // initialize img_width
            img_width = atoi(argv[1]);
            printf("[Debug]Img size is %i\n", img_width);

            faces = read_bin_data(argv[3]);
            //.......
        }

    Thanks for the help.

    Read the article

  • MySQL Gurus: How to pull a complex grid of data from MySQL database with one query?

    - by iopener
    Hopefully this is less complex than I think. I have one table of companies, and another table of jobs, and a third table with that contains a single entry for each employee in each job from each company. NOTE: Some companies won't have employees in some jobs, and some companies will have more than one employee in some jobs. The company table has a companyid and companyname field, the job table has a jobid and jobtitle field, and the employee table has employeeid, companyid, jobid and employeename fields. I want to build a table like this: +-----------+-----------+-----------+ | Company A | Company B | Company C | ------+-----------+-----------+-----------+ Job A | Emp 1 | Emp 2 | | ------+-----------+-----------+-----------+ Job B | Emp 3 | | Emp 4 | | | | Emp 5 | ------+-----------+-----------+-----------+ Job C | | Emp 6 | | | | Emp 7 | | | | Emp 8 | | ------+-----------+-----------+-----------+ I had previously been looping through a result set of jobs, and for each job, looping through a result set of each company, and for each company, looping through each employee and printing it in a table (gross, but performance was not supposed to be a consideration). The app has grown in popularity, and now we have 100 companies and hundreds of jobs, and the server is crapping out (all the id fields are indexed). Any suggestions on how to write a single query to get this data? I don't need the company names or job titles (obviously), but I do need some way to identify where each row from the result should be printed. I'm imagining a result set that just contained a long list of joined employees, and I could write a loop to use the companyid and employeeid values to tell me when to create a new cell or table row. This works as long as there aren't ZERO employees; I would need a NULL employee name for that I think? Am I completely on the wrong track? Thanks in advance for any ideas!

    Read the article

  • NetworkStream.Read delay .Net

    - by Gilbes
    I have a class that inherits from TcpClient. In that class I have a method to process responses. In that method I call I get the NetworkStream with MyBase.GetStream and call Read on it. This works fine, excpet the first call to read blocks too long. And by too long I mean that the socket has recieved plenty of data, but won't read it until some arbitrary limit is reached. I can see that it has recieved plenty of data using the packet sniffer WireShark. I have set the recieve buffer to small amounts, and very small amounts (like just a few bytes) to no avail. I have done the same with the buffer byte array I pass to the read method, and it still delays. Or to put it another way. I am download 600k. The download takes 5 seconds (at a little over 100k/second connection to the server which makes sense). The initial Read call takes 2-3 seconds and tells me only 256 bytes are availble (256 is the Recieve buffer and the size of the array I read in to). Then magically, the other few hundred thousand bytes can be read in 256 byte chunks in only a few process ticks each. Using a packet sniffer, I know that during those initial 2-3 seconds, the socket has recieved much more than just 256 bytes. My connection wasn't .25k/second for 3 seconds and then 400k for 2 seconds. How do I get the bytes from a socket as they come in?

    Read the article

  • Issue in setting alarm time in AlarmManager Class

    - by Anshuman
    I have used the following code to set the alarm time with the AlarmManager class. Now suppose my device's current date is 9-July-2012 11:31:00. If I set an alarm for 9-July-2012 11:45:00, it works fine and pops up an alarm at that time. But if I set an alarm for 10-Aug-2012 11:40:00, then as soon as I exit the app the alarm pops up, which is wrong because I set the alarm for a day in August. Why does this happen - is anything wrong in my code? If anyone knows, please help me solve this.

    Code for setting the alarm time with the AlarmManager class:

        Intent myIntent = new Intent(context, AlarmService.class);
        PendingIntent pendingIntent = PendingIntent.getService(context, i, myIntent, i);
        AlarmManager alarmManager = (AlarmManager) context.getSystemService(AlarmService.ALARM_SERVICE);

        Calendar calendar = Calendar.getInstance();
        calendar.setTimeInMillis(System.currentTimeMillis());
        calendar.add(Calendar.MILLISECOND, (int) dateDifferenceFromSystemTime(NoteManager.getSingletonObject().getAlarmTime(i)));
        alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), pendingIntent);

        public static long dateDifferenceFromSystemTime(Date date)
        {
            long difference = 0;
            try {
                Calendar c = Calendar.getInstance();
                difference = date.getTime() - c.getTimeInMillis();
                if (difference < 0) {
                    // if difference is -1 - means alarm time is of previous time then current
                    // then firstly change it to +positive and subtract form 86400000 to get exact new time to play alarm
                    // 86400000-Total no of milliseconds of 24hr Day
                    difference = difference * -1;
                    difference = 86400000 - difference;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            return difference;
        }

    Service class which pops up the alarm when the time matches:

        public class AlarmService extends IntentService {

            public void onCreate() {
                super.onCreate();
            }

            public AlarmService() {
                super("MyAlarmService");
            }

            @Override
            public int onStartCommand(Intent intent, int flags, int startId) {
                super.onStartCommand(intent, startId, startId);
                return START_STICKY;
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                startActivity(new Intent(this, AlarmDialogActivity.class).setFlags(Intent.FLAG_ACTIVITY_NEW_TASK));
            }
        }

    Read the article

  • When does an ARM7 processor increase its PC register?

    - by Summer_More_More_Tea
    Hi everyone: I've been thinking about this question for a while: when does an ARM7 (3-stage pipeline) processor increase its PC register?

    I originally thought that after an instruction has been executed, the processor first checks whether any exception occurred in the last execution, then increases the PC by 2 or 4 depending on the current state. If an exception occurs, the ARM7 changes its running mode, stores the PC in the LR of the current mode, and begins to process the current exception without modifying the PC register.

    But this makes no sense when analyzing the return instructions. I cannot work out why the PC is assigned LR when returning from an undefined-instruction exception, but LR-4 when returning from a prefetch abort - don't both of these exceptions happen at the decode stage? What's more, according to my textbook, the PC is always assigned LR-4 when returning from a prefetch abort, no matter what state the processor was in (ARM or Thumb) before the exception occurred. However, I think the PC should be assigned LR-2 if the original state was Thumb, since a Thumb instruction is 2 bytes long instead of the 4 bytes of an ARM instruction, and we just want to roll back one instruction in the current state.

    Are there any flaws in my reasoning, or is something wrong with the textbook? This seems like a long question; I really hope someone can help me get the right answer. Thanks in advance.

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what is the best way to implement synching data. Criteria are speed (less network data exchange), robustness (data recovery in case update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience? For me, I'm thinking of something like this: 1) Store Last Modified Date in iPhone 2) Upon launching, send a message like getNewData.php?lastModifiedDate=... 3) Server will process and send back only modified data from last time. 4) This data is formatted as so: <+><data id="..."></data></+> // add this to SQLite/CoreData <-><data id="..."></data></-> // remove this <%><data id="..."><attribute>newValue</attribute></data></%> // new modified value I don't want to make <+, <-, <%... for each attribute as well, because it would be too complicated, so probably when receive a <% field, I would just remove the data with the specified id and then add it again (assuming id here is not some automatically auto-incremented field). 5) Once everything is downloaded and updated, I will update the Last Modified Date field. The main problem with this strategy is: If the network goes down when I am updating something = the Last Modified Date is not yet updated = next time I relaunch the app, I will have to go through the same thing again. Not to mention potential inconsistent data. If I use a temporary table for update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data change), the user has to wait a long time until new data is available. Should I use Last-Modified-Date for each of the data field and update data gradually?

    Read the article
