Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong we can look for? Thanks!

    We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that. Is there anything we can do to improve this?

    We can CompiledQuery.Compile this - that doesn't do any work until first use and so helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly, if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also, I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks). The CompiledQuery object only contains a GUID reference into the cache, so it's the cache we really care about. (Writing this out, it occurs to me that I can kick off something in the background from app_startup to execute all queries to get them compiled - is that safe?)

    However, even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on: I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search results use fewer tables, so it's only a 3-4 second compile rather than 12-15, but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas?

    Profiling points to ELinqQueryState.GetExecutionPlan as the place to start, and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5, so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help. I have tried the EFProf utility advertised here, but it doesn't look like it would help with this; my large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads, rather than all 32 tables at once, likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables, so I'm optimistic there's some scope for improvement!

    Thanks for any suggestions! We're running against SQL Server 2008, in case that matters, on XP / 7 / Server 2008 R2 using RTM VS2010.
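
    For the app-startup warm-up idea, this is roughly what I have in mind - only a sketch, with a made-up MyEntities context and Customer entity standing in for our real model:

        using System;
        using System.Data.Objects;   // CompiledQuery lives here in EF4
        using System.Linq;
        using System.Threading;

        public static class QueryWarmup
        {
            // CompiledQuery.Compile does no work until the first invocation,
            // so each delegate is invoked once on a background thread.
            public static readonly Func<MyEntities, int, Customer> CustomerById =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    ctx.Customers.Include("Orders")
                       .FirstOrDefault(c => c.Id == id));

            // Call this from app_startup so the first real request finds
            // the plan already in the cache.
            public static void WarmUpInBackground()
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    using (var ctx = new MyEntities())
                    {
                        CustomerById(ctx, -1);   // result discarded
                    }
                });
            }
        }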


  • Generate reasonable length license key with asymmetric encryption?

    - by starkos
    I've been looking at this all day. I probably should have walked away from it hours ago; I might be missing something obvious at this point.

    Short version: Is there a way to generate and boil down an asymmetrically encrypted hash to a reasonable number of unambiguous, human-readable characters?

    Long version: I want to generate license keys for my software. I would like these keys to be of a reasonable length (25-36 characters) and easily read and entered by a human (so avoid ambiguous characters like the number 0 and the capital letter O). Finally - and this seems to be the kicker - I'd really like to use asymmetric encryption to make it more difficult to generate new keys.

    I've got the general approach: concatenate my information (user name, product version, a salt) into a string and generate a SHA1() hash from that, then encrypt the hash with my private key. On the client, build the SHA1() hash from the same information, then decrypt the license with the public key and see if I've got a match.

    Since this is a Mac app, I looked at AquaticPrime, but that generates a relatively large license file rather than a string. I can work with that if I must, but as a user I really like the convenience of a license key that I can read and print. I also looked at CocoaFob, which does generate a key, but it is so long that I'd want to deliver it as a file anyway. I fooled around with OpenSSL for a while but couldn't come up with anything of a reasonable length.

    So... am I missing something obvious here? Is there a way to generate and boil down an asymmetrically encrypted hash to a reasonable number of unambiguous, human-readable characters? I'm open to buying a solution, but I work on a number of different platforms, so I'd want something portable. Everything I've looked at so far has been platform specific.

    Many, many thanks for a solution! PS - Yes, I know it will still be cracked. I'm trying to come up with something reasonable that, as a user, I would still find friendly.
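
    To make the length problem concrete, here is a little sketch I put together in C# (the language is incidental; the arithmetic is the same everywhere): even DSA's comparatively small 40-byte signatures come out at 64 characters once encoded with an unambiguous Base32 alphabet, already over my 25-36 character budget.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class LicenseSketch
        {
            // 32 symbols: digits 2-9 plus letters, minus the ambiguous I and O.
            const string Alphabet = "23456789ABCDEFGHJKLMNPQRSTUVWXYZ";

            static string Encode(byte[] data)
            {
                var sb = new StringBuilder();
                int buffer = 0, bits = 0;
                foreach (byte b in data)
                {
                    buffer = (buffer << 8) | b;
                    bits += 8;
                    while (bits >= 5)
                    {
                        bits -= 5;
                        sb.Append(Alphabet[(buffer >> bits) & 31]);
                    }
                }
                if (bits > 0)
                    sb.Append(Alphabet[(buffer << (5 - bits)) & 31]);
                return sb.ToString();
            }

            static void Main()
            {
                var dsa = new DSACryptoServiceProvider(1024);      // vendor key pair
                byte[] payload = Encoding.UTF8.GetBytes("user|1.0|salt");
                byte[] signature = dsa.SignData(payload);          // 40 bytes for 1024-bit DSA
                Console.WriteLine(Encode(signature));              // 64 characters
            }
        }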


  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access to the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive, and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block), but my code doesn't crash and it runs fast enough.

    At work my multithreaded code needs to be tighter, only locking / synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this:

        EnterCriticalSection(&m_Crit4);
        m_bSomeVariable = true;
        LeaveCriticalSection(&m_Crit4);

    Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction, and since context switches occur on an instruction boundary there's no need to synchronise this operation with a critical section. I did some more research online to see whether this operation needed synchronisation, and I came up with two scenarios where it did:

    1. The CPU implements out-of-order execution, or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and
    2. The int is not 4-byte aligned.

    I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to this variable using memory barriers, ensuring that the variable is always completely written / read to the main system memory before using it. Number 2 I cannot verify; I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not, do you need to use a combination of instructions? That would introduce the problem.

    So... QUESTION 1: Does using the "volatile" keyword (implicitly using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte / 8-byte variable on x86 / x64 between read / write operations?

    QUESTION 2: Is there an explicit requirement that the variable be 4-byte / 8-byte aligned?

    I did some more digging into our code and the variables defined in the class:

        class CExample
        {
        private:
            CRITICAL_SECTION m_Crit1; // Protects variable a
            CRITICAL_SECTION m_Crit2; // Protects variable b
            CRITICAL_SECTION m_Crit3; // Protects variable c
            CRITICAL_SECTION m_Crit4; // Protects variable d
            // ...
        };

    Now, to me this seems excessive. I thought critical sections synchronised threads between a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect; if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here; named mutexes are shared across processes - or only across processes of the same name?

    QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes? I have had a look at other synchronisation objects (semaphores and spinlocks); are they better suited here?

    QUESTION 4: Where are critical sections / mutexes / semaphores / spinlocks best suited? That is, which synchronisation problem should each be applied to? Is there a vast performance penalty for choosing one over the other?

    And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only a multi-core multithreaded environment. So, QUESTION 5: Is this wrong, or if not, why is it right?

    Thanks in advance for any responses :)


  • Win32 window capture with BitBlt not displaying border

    - by user292533
    I have written some C++ code to capture a window to a .bmp file.

        BITMAPFILEHEADER get_bitmap_file_header(int width, int height)
        {
            BITMAPFILEHEADER hdr;
            memset(&hdr, 0, sizeof(BITMAPFILEHEADER));
            hdr.bfType = ((WORD) ('M' << 8) | 'B'); // is always "BM"
            hdr.bfSize = 0; //sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) + (width * height * sizeof(int));
            hdr.bfReserved1 = 0;
            hdr.bfReserved2 = 0;
            hdr.bfOffBits = (DWORD)(sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER));
            return hdr;
        }

        BITMAPINFO get_bitmap_info(int width, int height)
        {
            BITMAPINFO bmi;
            memset(&bmi.bmiHeader, 0, sizeof(BITMAPINFOHEADER)); // initialize bitmap header
            bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
            bmi.bmiHeader.biWidth = width;
            bmi.bmiHeader.biHeight = height;
            bmi.bmiHeader.biPlanes = 1;
            bmi.bmiHeader.biBitCount = 4 * 8;
            bmi.bmiHeader.biCompression = BI_RGB;
            bmi.bmiHeader.biSizeImage = width * height * 4;
            return bmi;
        }

        void get_bitmap_from_window(HWND hWnd, int * imageBuff)
        {
            HDC hDC = GetWindowDC(hWnd);
            SIZE size = get_window_size(hWnd);
            HDC hMemDC = CreateCompatibleDC(hDC);
            RECT r;
            HBITMAP hBitmap = CreateCompatibleBitmap(hDC, size.cx, size.cy);
            HBITMAP hOld = (HBITMAP)SelectObject(hMemDC, hBitmap);
            BitBlt(hMemDC, 0, 0, size.cx, size.cy, hDC, 0, 0, SRCCOPY);
            //PrintWindow(hWnd, hMemDC, 0);
            BITMAPINFO bmi = get_bitmap_info(size.cx, size.cy);
            GetDIBits(hMemDC, hBitmap, 0, size.cy, imageBuff, &bmi, DIB_RGB_COLORS);
            SelectObject(hMemDC, hOld);
            DeleteDC(hMemDC);
            ReleaseDC(NULL, hDC);
        }

        void save_image(HWND hWnd, char * name)
        {
            int * buff;
            RECT r;
            SIZE size;
            GetWindowRect(hWnd, &r);
            size.cx = r.right - r.left;
            size.cy = r.bottom - r.top;
            buff = (int*)malloc(size.cx * size.cy * sizeof(int));
            get_bitmap_from_window(hWnd, buff);
            BITMAPINFO bmi = get_bitmap_info(size.cx, size.cy);
            BITMAPFILEHEADER hdr = get_bitmap_file_header(size.cx, size.cy);
            FILE * fout = fopen(name, "w");
            fwrite(&hdr, 1, sizeof(BITMAPFILEHEADER), fout);
            fwrite(&bmi.bmiHeader, 1, sizeof(BITMAPINFOHEADER), fout);
            fwrite(buff, 1, size.cx * size.cy * sizeof(int), fout);
            fflush(fout);
            fclose(fout);
            free(buff);
        }

    It works fine under XP, but under Vista the border of the window is transparent. Using PrintWindow solves the problem, but is unacceptable for performance reasons. Is there a performant code change, or a setting that can be changed, to make the border non-transparent?


  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
    I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP.NET ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging parameters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. Also there are some parameters passed in to filter the data on various conditions.

    One common way to do this seems to be something like this:

    [Method 1]

        ;WITH stuff AS (
            SELECT
                CASE
                    WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                    WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                    WHEN @SortBy = ...
                    ELSE ROW_NUMBER() OVER (ORDER BY whatever)
                END AS Row,
                ., ., .,
            FROM Table1
            INNER JOIN Table2 ...
            LEFT JOIN Table3 ...
            WHERE ... (lots of things to check)
        )
        SELECT *
        FROM stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    One problem with this is that it doesn't give the total count, and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total, but repeats it for every row in the result set, which is not ideal.

    [Method 2] An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) + ' bit more SQL' gubbins.

    [Method 3] The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable, is to go back to the "old way" of using a table variable:

        DECLARE @stuff TABLE (Row INT, ...)

        INSERT INTO @stuff
        SELECT
            CASE
                WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                WHEN @SortBy = ...
                ELSE ROW_NUMBER() OVER (ORDER BY whatever)
            END AS Row,
            ., ., .,
        FROM Table1
        INNER JOIN Table2 ...
        LEFT JOIN Table3 ...
        WHERE ... (lots of things to check)

        SELECT *
        FROM @stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    (Or a similar method using an IDENTITY column on the table variable.) Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter.

    I did some tests, and with a fairly simple version of the query (no sortBy and no filter) method 1 seems to come out on top (almost twice as quick as the other two). Then I tested with something closer to the complexity I actually need, with the SQL in stored procedures. With this, method 1 takes nearly twice as long as the other two methods. Which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3?

    UPDATE - 15 March 2012: I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the result set. This seemed to add significantly to the time (more than I expected). I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available), but still the comparison should be valid. I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also, this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.


  • Select number of rows for each group where two column values make one group

    - by Fábio Antunes
    I have two select statements joined by UNION ALL. In the first statement a where clause gathers only rows that have been shown previously to the user. The second statement gathers all rows that haven't been shown to the user; therefore I end up with the viewed results first and non-viewed results after. Of course this could simply be achieved with the same select statement using a simple ORDER BY; however, the reason for two separate selects is simple after you realize what I hope to accomplish. Consider the following structure and data.

        +----+------+-----+--------+------+
        | id | from | to  | viewed | data |
        +----+------+-----+--------+------+
        | 1  | 1    | 10  | true   | .... |
        | 2  | 10   | 1   | true   | .... |
        | 3  | 1    | 10  | true   | .... |
        | 4  | 6    | 8   | true   | .... |
        | 5  | 1    | 10  | true   | .... |
        | 6  | 10   | 1   | true   | .... |
        | 7  | 8    | 6   | true   | .... |
        | 8  | 10   | 1   | true   | .... |
        | 9  | 6    | 8   | true   | .... |
        | 10 | 2    | 3   | true   | .... |
        | 11 | 1    | 10  | true   | .... |
        | 12 | 8    | 6   | true   | .... |
        | 13 | 10   | 1   | false  | .... |
        | 14 | 1    | 10  | false  | .... |
        | 15 | 6    | 8   | false  | .... |
        | 16 | 10   | 1   | false  | .... |
        | 17 | 8    | 6   | false  | .... |
        | 18 | 3    | 2   | false  | .... |
        +----+------+-----+--------+------+

    Basically I wish all non-viewed rows to be selected by the statement; that is accomplished by checking whether the viewed column is true or false - pretty simple and straightforward, nothing to worry about here. However, when it comes to the rows already viewed, meaning the column viewed is TRUE, for those records I only want 3 rows to be returned for each group. The appropriate result in this instance should be the 3 most recent rows of each group.

        +----+------+-----+--------+------+
        | id | from | to  | viewed | data |
        +----+------+-----+--------+------+
        | 6  | 10   | 1   | true   | .... |
        | 7  | 8    | 6   | true   | .... |
        | 8  | 10   | 1   | true   | .... |
        | 9  | 6    | 8   | true   | .... |
        | 10 | 2    | 3   | true   | .... |
        | 11 | 1    | 10  | true   | .... |
        | 12 | 8    | 6   | true   | .... |
        +----+------+-----+--------+------+

    As you see from the ideal result set, we have three groups. Therefore the desired query for the viewed results should show a maximum of 3 rows for each grouping it finds. In this case these groupings were 10 with 1 and 8 with 6, both of which had three rows to be shown, while the other group, 2 with 3, only had one row to be shown. Please note that where from = x and to = y, it makes the same grouping as if it were from = y and to = x. Therefore, considering the first grouping (10 with 1), from = 10 and to = 1 is the same group as from = 1 and to = 10. However, there are plenty of groups in the whole table, and I only wish the 3 most recent of each to be returned by the select statement - and that's my problem: I'm not sure how that can be accomplished in the most efficient way possible, considering the table will have hundreds if not thousands of records at some point. Thanks for your help.

    Note: The columns id, from, to and viewed are indexed; that should help with performance.

    PS: I'm unsure how to name this question exactly; if you have a better idea, be my guest and edit the title.


  • List input and output audio devices in Applet

    - by Jhonny Everson
    I am running a signed applet that needs to provide the ability for the user to select the input and output audio devices (similar to what Skype provides). I borrowed the following code from another thread:

        import javax.sound.sampled.*;

        public class SoundAudit {
            public static void main(String[] args) {
                try {
                    System.out.println("OS: " + System.getProperty("os.name") + " "
                        + System.getProperty("os.version") + "/"
                        + System.getProperty("os.arch") + "\nJava: "
                        + System.getProperty("java.version") + " ("
                        + System.getProperty("java.vendor") + ")\n");
                    for (Mixer.Info thisMixerInfo : AudioSystem.getMixerInfo()) {
                        System.out.println("Mixer: " + thisMixerInfo.getDescription()
                            + " [" + thisMixerInfo.getName() + "]");
                        Mixer thisMixer = AudioSystem.getMixer(thisMixerInfo);
                        for (Line.Info thisLineInfo : thisMixer.getSourceLineInfo()) {
                            if (thisLineInfo.getLineClass().getName().equals("javax.sound.sampled.Port")) {
                                Line thisLine = thisMixer.getLine(thisLineInfo);
                                thisLine.open();
                                System.out.println("  Source Port: " + thisLineInfo.toString());
                                for (Control thisControl : thisLine.getControls()) {
                                    System.out.println(AnalyzeControl(thisControl));
                                }
                                thisLine.close();
                            }
                        }
                        for (Line.Info thisLineInfo : thisMixer.getTargetLineInfo()) {
                            if (thisLineInfo.getLineClass().getName().equals("javax.sound.sampled.Port")) {
                                Line thisLine = thisMixer.getLine(thisLineInfo);
                                thisLine.open();
                                System.out.println("  Target Port: " + thisLineInfo.toString());
                                for (Control thisControl : thisLine.getControls()) {
                                    System.out.println(AnalyzeControl(thisControl));
                                }
                                thisLine.close();
                            }
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public static String AnalyzeControl(Control thisControl) {
                String type = thisControl.getType().toString();
                if (thisControl instanceof BooleanControl) {
                    return "    Control: " + type + " (boolean)";
                }
                if (thisControl instanceof CompoundControl) {
                    System.out.println("    Control: " + type + " (compound - values below)");
                    String toReturn = "";
                    for (Control children : ((CompoundControl) thisControl).getMemberControls()) {
                        toReturn += "  " + AnalyzeControl(children) + "\n";
                    }
                    return toReturn.substring(0, toReturn.length() - 1);
                }
                if (thisControl instanceof EnumControl) {
                    return "    Control:" + type + " (enum: " + thisControl.toString() + ")";
                }
                if (thisControl instanceof FloatControl) {
                    return "    Control: " + type + " (float: from "
                        + ((FloatControl) thisControl).getMinimum() + " to "
                        + ((FloatControl) thisControl).getMaximum() + ")";
                }
                return "    Control: unknown type";
            }
        }

    But what I get is:

        Mixer: Software mixer and synthesizer [Java Sound Audio Engine]
        Mixer: No details available [Microphone (Pink Front)]

    I was expecting to get the real list of my devices (my preferences panel shows 3 output devices and 1 microphone). I am running on Mac OS X 10.6.7. Is there another way to get that info from Java?


  • Is the Scala 2.8 collections library a case of "the longest suicide note in history"?

    - by oxbow_lakes
    First, note that the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective, but it is a genuine question; I've made it CW and I'd like some opinions on the matter.

    Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford and I've been programming commercially for almost 12 years, and in Scala for about a year (also commercially).

    I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that the library, from a usage perspective, has changed little. For example...

        > List("Paris", "London").map(_.length)
        res0: List[Int] = List(5, 6)

    ...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like:

        def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

    For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user-base).

    - Is this going to put people off coming to Scala?
    - Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off?
    - Was the library re-design a sensible idea?
    - If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens?

    Steve Yegge once attacked Scala (mistakenly in my opinion) for what he saw as its overcomplicated type-system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java).

    Note - I should be clear that, whilst I believe that Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held belief that the proposal represented a mistake.


  • How to embed an HTML table into the body of an email

    - by Michael Mao
    Hi all: I am sending info to a target email via PHP's native mail() method right now. Everything else works fine, but the table part troubles me the most. See sample output:

        Dear Michael Mao:
        Thank you for purchasing flight tickets with us, here is your receipt:
        Your tickets will be delivered by mail to the following address:
        Street Address 1 : sdfsdafsadf sdf
        Street Address 2 : N/A
        City : Sydney
        State : nsw
        Postcode : 2
        Country : Australia
        Credit Card Number : *************1234
        Your purchase details are recorded as :
        <table><tr><th class="delete">del?</th><th class="from_city">from</th>
        <th class="to_city">to</th><th class="quantity">qty</th><th class="price">unit price</th>
        <th class="price">total price</th></tr><tr class="evenrow" id="Sydney-Lima">
        <td><input name="isDeleting" type="checkbox"></td><td>Sydney</td><td>Lima</td>
        <td>1</td><td>1030.00</td><td>1030</td></tr><tr class="oddrow" id="Sydney-Perth">
        <td><input name="isDeleting" type="checkbox"></td><td>Sydney</td><td>Perth</td>
        <td>3</td><td>340.00</td><td>1020</td></tr><tr class="totalprice">
        <td colspan="5">Grand Total Price</td><td id="grandtotal">2050</td></tr></table>

    The source of the table is taken directly from a webpage, exactly the same. However, Gmail, Hotmail and most other email clients will not render this as a table. So I am wondering, without using Outlook or other email sending agent software, how could I craft an embedded table for the PHP mail() method to send?

    The current code snippet corresponding to table generation:

        $purchaseinfo = $_POST["purchaseinfo"];

        // if html tags are not to be filtered in the body of email
        $stringBuilder .= "<table>" . stripslashes($purchaseinfo) . "</table>";

        // must send json response back to caller ajax request
        if (mail($email, 'Your purchase information on www.hardlyworldtravel.com', $emailbody, $headers))
            echo json_encode(array("feedback" => "successful"));
        else
            echo json_encode(array("feedback" => "error"));

    Any hints and suggestions are welcomed; thanks a lot in advance.


  • How do you attract programmers in rural areas?

    - by Reed Copsey
    I run a software development group for a very small but stable and established company in a small town, somewhat outside of the "big city". Unfortunately, the "programmer" labor pool is much smaller due to the size of the city. There are many positives to working in this area, especially in terms of quality of life (particularly for people interested in outdoor activities), lower cost of living, great schools and neighborhoods, etc. However, I've always had difficulty attracting high-quality, experienced developers.

    For those of you who hire developers outside of large cities:

    - Where do you advertise to find good developers? Many of the large sites are very focused on certain metropolitan areas, and seem inappropriate places to advertise if you're outside of that main region.
    - How do you attract quality developers to rural (or at least less metropolitan) locations?
    - Do you find that you make more sacrifices in your hiring due to a smaller labor pool? Or do you just wait, and take extra time to attract people? What sacrifices do you expect to make if you are outside of the main developer-rich cities?

    For all of the developers out there...

    - What would entice you to work in a smaller town? Are there things that would stand out and make you willing to relocate, or at least apply to a position that was not nearby?
    - What specific qualities would help you want to move outside of the city?

    In the past, I've had difficulty finding good people. Most of the people who've applied and been willing to move out to a more rural location seem like the types that can't keep a quality job elsewhere. I'd like to know what advice people have for attracting quality technical staff. I don't believe it's the work itself that's the problem - the work is both interesting and challenging, and nearly 100% new development. The developers I have seem very happy with their situation - they love the work, the atmosphere, etc. It's more a matter of finding willing, able developers.

    Edit: More info after the first couple of answers: Right now, some of my best developers telecommute (some work from overseas); however, for this question, I'm trying to figure out how to get people who want to live and work full time locally. I need some people with whom I interact every day.


  • PL/SQL pre-compile and Code Quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, CRUD). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database.

    Requirements: There are a couple of things we want to test, in the following priority:

    1. Are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile?
    2. Code quality: Do we have code flaws like duplicates, too-complex methods or other violations of a defined set of rules?

    Also, the tool must run headless (command line, Ant, ...), and we want to be able to do analysis on a partial code base (changed sources only).

    Tools: We did a little research and found the following tools that could potentially help:

    - Cast Application Intelligence Platform (AIP): Seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format.
    - Toad for Oracle: The Professional version is said to include something called Xpert that validates a set of rules against a code base.
    - Sonar + PL/SQL-Plugin: Uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base.
    - Semantic Designs DMSToolkit: Quite general analysis of the source code base. Command line available?
    - Semantic Designs Clones Detector: Detects clones. But also via command line?
    - Fortify Source Code Analyzer: Seems to be focussed on security issues. But maybe it is extensible?
    - more...

    So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences?

    Related questions on SO:
    http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures
    http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql
    http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql


  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause.

    I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java. It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal.

    On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code - which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far, far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary.

    So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords - 63 - and operators - 54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy.

    So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.


  • Help with storing/accessing user access roles C# Winforms

    - by user222453
    Hello; firstly, I would like to thank you in advance for any assistance provided. I am new to software development and have designed several client/server applications over the last 12 months or so. I am currently working on a project that involves a user logging in to gain access to the application, and I am looking at the most efficient and "simple" method of storing the user's permissions once logged in, which can then be used throughout the application to restrict access to certain tabs on the main form. I have created a static class called "User", detailed below:

        static class User
        {
            public static int _userID;
            public static string _moduleName;
            public static string _userName;

            public static object[] UserData(object[] _dataRow)
            {
                _userID = (int)_dataRow[0];
                _userName = (string)_dataRow[1];
                _moduleName = (string)_dataRow[2];
                return _moduleName;
            }
        }

    When the user logs in and has been authenticated, I wish to store the _moduleName objects in memory so I can control which tabs on the main form's tab control they can access. For example, if the user has been assigned the following roles in the database: "Sales Ledger" and "Purchase Ledger", they can only see the relevant tabs on the form, by way of using a switch-case block once the login form is hidden and the main form is instantiated. I can store the userID and userName variables in the main form once it loads by means of, say, the following. Here we process the login data from the user:

        DataAccess _dal = new DataAccess();
        switch (_dal.ValidateLogin(txtUserName.Text, txtPassword.Text))
        {
            case DataAccess.ValidationCode.ConnectionFailed:
                MessageBox.Show("Database Server Connection Failed!");
                break;
            case DataAccess.ValidationCode.LoginFailed:
                MessageBox.Show("Login Failed!");
                _dal.RecordLogin(out errMsg, txtUserName.Text, workstationID, false);
                break;
            case DataAccess.ValidationCode.LoginSucceeded:
                frmMain frmMain = new frmMain();
                _dal.GetUserPrivList(out errMsg, 2); // <- here I access my DB and get the user permissions based on the current login.
                frmMain.Show();
                this.Hide();
                break;
            default:
                break;
        }

        private void frmMain_Load(object sender, EventArgs e)
        {
            int UserID = User._userID;
        }

    That works fine; however, the _modules object contains multiple permissions/roles depending on what has been set in the database. How can I store the multiple values and access them via a switch-case block? Thank you again in advance.
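
    One direction I've been considering - just a sketch, and the tab names below are made up - is to keep the role names in a set and test membership, instead of switching on a single value:

        using System.Collections.Generic;

        static class User
        {
            public static int UserId;
            public static string UserName;

            // All module names returned for this login.
            private static readonly HashSet<string> _roles = new HashSet<string>();

            public static void SetRoles(IEnumerable<string> moduleNames)
            {
                _roles.Clear();
                foreach (string name in moduleNames)
                    _roles.Add(name);
            }

            public static bool HasRole(string moduleName)
            {
                return _roles.Contains(moduleName);
            }
        }

        // Then in frmMain_Load, one line per tab instead of a switch
        // (tab control names are hypothetical):
        //   tabSalesLedger.Visible    = User.HasRole("Sales Ledger");
        //   tabPurchaseLedger.Visible = User.HasRole("Purchase Ledger");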


  • Looking for a better design: A readonly in-memory cache mechanism

    - by Dylan Lin
    Hi all, I have a Category entity (class), which has zero or one parent Category and many child Categories - it's a tree structure. The Category data is stored in an RDBMS, so for better performance I want to load all categories and cache them in memory while launching the application. Our system can have plugins, and we allow plugin authors to access the Category tree, but they should not modify the cached items or the tree (I think a non-readonly design might cause some subtle bugs in this scenario); only the system knows when and how to refresh the tree. Here is some demo code:

        public interface ITreeNode<T> where T : ITreeNode<T>
        {
            // No setter
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        // This class is generated by an O/R mapping tool (e.g. Entity Framework)
        public class Category : EntityObject
        {
            public string Name { get; set; }
        }

        // Because Category is not stateless, I create a cleaner view class for
        // Category. This class is the node type of the Category tree.
        public class CategoryView : ITreeNode<CategoryView>
        {
            public string Name { get; private set; }

            #region ITreeNode Members
            public CategoryView Parent { get; private set; }

            private List<CategoryView> _childNodes;
            public IEnumerable<CategoryView> ChildNodes
            {
                get { return _childNodes; }
            }
            #endregion

            public static CategoryView CreateFrom(Category category)
            {
                // here I can set the CategoryView.Name property
            }
        }

    So far so good. However, I want to make the ITreeNode interface reusable, and for some other types the tree should not be readonly. We are not able to do this with the above readonly ITreeNode, so I want ITreeNode to be like this:

        public interface ITreeNode<T>
        {
            // has setter
            T Parent { get; set; }
            // use ICollection<T> instead of IEnumerable<T>
            ICollection<T> ChildNodes { get; }
        }

    But if we make ITreeNode writable, then we cannot make the Category tree readonly, which is not good. So I think we could do it like this:

        public interface ITreeNode<T>
        {
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        public interface IWritableTreeNode<T> : ITreeNode<T>
        {
            new T Parent { get; set; }
            new ICollection<T> ChildNodes { get; }
        }

    Is this good or bad? Are there better designs? Thanks a lot! :)
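
    For the readonly side, here is a sketch of what I'm currently leaning towards (the names are invented): the cache keeps one writable node type internally, and plugins only ever see the read-only interface, with children exposed through ReadOnlyCollection:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public interface IReadableTreeNode<T> where T : IReadableTreeNode<T>
        {
            T Parent { get; }
            ReadOnlyCollection<T> ChildNodes { get; }
        }

        public sealed class CategoryNode : IReadableTreeNode<CategoryNode>
        {
            private readonly List<CategoryNode> _children = new List<CategoryNode>();

            public string Name { get; private set; }
            public CategoryNode Parent { get; private set; }

            public ReadOnlyCollection<CategoryNode> ChildNodes
            {
                get { return _children.AsReadOnly(); }
            }

            // internal: only the cache-building code in this assembly
            // can mutate the tree; plugins cannot.
            internal void AddChild(CategoryNode child)
            {
                child.Parent = this;
                _children.Add(child);
            }

            public CategoryNode(string name)
            {
                Name = name;
            }
        }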


  • Referencing both an old version and new version of the same DLL (VB.Net)

    - by ckittel
    Consider the following situation: WidgetCompany produced a .NET DLL in 2006 called Widget.dll, version 1.0. I consumed this Widget.dll file throughout my VB.Net application. Over time, WidgetCompany has been updating Widget.dll; I never bothered to keep up, continuing to ship version 1.0 of Widget.dll with my software.

    It's now 2010, my project is now a VB.Net 3.5 application, and WidgetCompany has come out with Widget.dll version 3.0. It looks and functions almost identically to Widget.dll version 1.0, using all the same namespaces and type names from before. However, Widget.dll version 3.0 has many run-time breaking changes since 1.0, and I cannot simply cut over to the new version; on the other hand, I don't want to continue developing against the 1.0 version and thereby keep digging myself deeper into the hole. What I want to do is all new development in my project with Widget.dll version 3.0, whilst keeping Widget.dll version 1.0 around until I find time to convert all of my 1.0 consumption to the newer 3.0 code.

    Now, for starters, I obviously cannot simply reference both Widget.dll (ver 1.0) and Widget.dll (ver 3.0) in Visual Studio. Doing so gives me the following message: "A reference to 'Widget.dll' could not be added. A reference to the component 'Widget' already exists in the project." To work around that, I can simply rename version 3.0's Widget.dll to Widget.3.dll. But this is where I'm stuck. Any attempt to reference types found in "the DLL" leads to ambiguity, and the compiler obviously doesn't have any clue as to what I really want in this or that case.

    Is there something I can do that gives a DLL a new "root" namespace or something? For example, if I could say "Widget.dll has a new root namespace of Legacy", then I could update existing code to reference the types found in the Legacy.<RootNamespace> namespace, while all new code could simply reference types from the <RootNamespace> namespace. Pipe dream or reality? Are there other solutions to situations like this (besides "don't get in this situation in the first place")?
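
    One possible angle, sketched below with invented type names: C# supports exactly this scenario through reference aliases and the extern alias directive. As far as I can tell, VB.Net has no equivalent, so the workaround would be a small C# shim assembly that references both versions under different aliases and re-exposes them to the VB.Net code:

        // WidgetShim.cs - compiled with reference aliases:
        //   csc /target:library /r:V1=Widget.v1.dll /r:V3=Widget.v3.dll WidgetShim.cs
        // "WidgetCompany.Widget" is a hypothetical type name.
        extern alias V1;
        extern alias V3;

        public static class WidgetShim
        {
            // Both versions of the "same" type are now unambiguous.
            public static V1::WidgetCompany.Widget CreateLegacyWidget()
            {
                return new V1::WidgetCompany.Widget();
            }

            public static V3::WidgetCompany.Widget CreateCurrentWidget()
            {
                return new V3::WidgetCompany.Widget();
            }
        }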


  • Concurrently calling Graphics.DrawImage and new Bitmap from memory in separate threads takes a long time

    - by Abdul jalil
    Example 1:

        public partial class Form1 : Form
        {
            public Form1()
            {
                InitializeComponent();
                pro = new Thread(new ThreadStart(Producer));
                con = new Thread(new ThreadStart(Consumer));
            }

            private AutoResetEvent m_DataAvailableEvent = new AutoResetEvent(false);
            Queue<Bitmap> queue = new Queue<Bitmap>();
            Thread pro;
            Thread con;

            public void Producer()
            {
                MemoryStream[] ms = new MemoryStream[3];
                for (int y = 0; y < 3; y++)
                {
                    StreamReader reader = new StreamReader("image" + (y + 1) + ".JPG");
                    BinaryReader breader = new BinaryReader(reader.BaseStream);
                    byte[] buffer = new byte[reader.BaseStream.Length];
                    breader.Read(buffer, 0, buffer.Length);
                    ms[y] = new MemoryStream(buffer);
                }
                while (true)
                {
                    for (int x = 0; x < 3; x++)
                    {
                        Bitmap bmp = new Bitmap(ms[x]);
                        queue.Enqueue(bmp);
                        m_DataAvailableEvent.Set();
                        Thread.Sleep(6);
                    }
                }
            }

            public void Consumer()
            {
                Graphics g = pictureBox1.CreateGraphics();
                while (true)
                {
                    m_DataAvailableEvent.WaitOne();
                    Bitmap bmp = queue.Dequeue();
                    if (bmp != null)
                    {
                        // Bitmap bmp = new Bitmap(ms);
                        g.DrawImage(bmp, new Point(0, 0));
                        bmp.Dispose();
                    }
                }
            }

            private void pictureBox1_Click(object sender, EventArgs e)
            {
                con.Start();
                pro.Start();
            }
        }

    When creating the bitmap and drawing to the picture box happen in separate threads, Bitmap bmp = new Bitmap(ms[x]) takes 45.591 milliseconds and g.DrawImage(bmp, new Point(0, 0)) takes 41.430 milliseconds. When I make the bitmap from the MemoryStream and draw it to the picture box in one thread, Bitmap bmp = new Bitmap(ms[x]) takes 29.619 milliseconds and g.DrawImage(bmp, new Point(0, 0)) takes 35.540 milliseconds. The second listing below is Example 2. Why do the bitmap creation and the draw take more time in separate threads, and how can I reduce the time when processing in separate threads? I am using ANTS Performance Profiler 4.3.

    Example 2:

        public Form1()
        {
            InitializeComponent();
            pro = new Thread(new ThreadStart(Producer));
            con = new Thread(new ThreadStart(Consumer));
        }

        private AutoResetEvent m_DataAvailableEvent = new AutoResetEvent(false);
        Queue<MemoryStream> queue = new Queue<MemoryStream>();
        Thread pro;
        Thread con;

        public void Producer()
        {
            MemoryStream[] ms = new MemoryStream[3];
            for (int y = 0; y < 3; y++)
            {
                StreamReader reader = new StreamReader("image" + (y + 1) + ".JPG");
                BinaryReader breader = new BinaryReader(reader.BaseStream);
                byte[] buffer = new byte[reader.BaseStream.Length];
                breader.Read(buffer, 0, buffer.Length);
                ms[y] = new MemoryStream(buffer);
            }
            while (true)
            {
                for (int x = 0; x < 3; x++)
                {
                    // Bitmap bmp = new Bitmap(ms[x]);
                    queue.Enqueue(ms[x]);
                    m_DataAvailableEvent.Set();
                    Thread.Sleep(6);
                }
            }
        }

        public void Consumer()
        {
            Graphics g = pictureBox1.CreateGraphics();
            while (true)
            {
                m_DataAvailableEvent.WaitOne();
                MemoryStream ms = queue.Dequeue();
                if (ms != null)
                {
                    Bitmap bmp = new Bitmap(ms);
                    g.DrawImage(bmp, new Point(0, 0));
                    bmp.Dispose();
                }
            }
        }

        private void pictureBox1_Click(object sender, EventArgs e)
        {
            con.Start();
            pro.Start();
        }
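
    Two things I suspect in my own Example 2, shown here as an untested sketch of the consumer: Queue<T> is not thread-safe, so the shared queue needs a lock on both the producer and consumer sides, and a reused MemoryStream is left positioned at its end after the first decode, so it must be rewound before each new Bitmap(ms):

        // Sketch only; uses the pictureBox1, m_DataAvailableEvent and
        // queue members from the listing above.
        public void Consumer()
        {
            using (Graphics g = pictureBox1.CreateGraphics())
            {
                while (true)
                {
                    m_DataAvailableEvent.WaitOne();

                    MemoryStream ms = null;
                    lock (queue)            // Producer must lock (queue) too
                    {
                        if (queue.Count > 0)
                            ms = queue.Dequeue();
                    }
                    if (ms == null)
                        continue;

                    ms.Position = 0;        // rewind before decoding again
                    using (Bitmap bmp = new Bitmap(ms))
                    {
                        g.DrawImage(bmp, new Point(0, 0));
                    }
                }
            }
        }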


  • Overriding Object.Equals() instance method in C#; now Code Analysis / FxCop warning CA2218: "should also redefine GetHashCode"

    - by Chris W. Rea
    I've got a complex class in my C# project on which I want to be able to do equality tests. It is not a trivial class; it contains a variety of scalar properties as well as references to other objects and collections (e.g. IDictionary). For what it's worth, my class is sealed.

    To enable a performance optimization elsewhere in my system (an optimization that avoids a costly network round-trip), I need to be able to compare instances of these objects to each other for equality - other than the built-in reference equality - and so I'm overriding the Object.Equals() instance method. However, now that I've done that, Visual Studio 2008's Code Analysis, a.k.a. FxCop, which I keep enabled by default, is raising the following warning:

        warning : CA2218 : Microsoft.Usage : Since 'MySuperDuperClass' redefines Equals, it should also redefine GetHashCode.

    I think I understand the rationale for this warning: if I am going to be using such objects as the key in a collection, the hash code is important, i.e. see this question. However, I am not going to be using these objects as the key in a collection. Ever.

    Feeling justified to suppress the warning, I looked up code CA2218 in the MSDN documentation to get the full name of the warning so I could apply a SuppressMessage attribute to my class as follows:

        [SuppressMessage("Microsoft.Naming", "CA2218:OverrideGetHashCodeOnOverridingEquals",
            Justification = "This class is not to be used as key in a hashtable.")]

    However, while reading further, I noticed the following:

        How to Fix Violations
        To fix a violation of this rule, provide an implementation of GetHashCode. For a pair of objects of the same type, you must ensure that the implementation returns the same value if your implementation of Equals returns true for the pair.

        When to Suppress Warnings
        Do not suppress a warning from this rule. [arrow & emphasis mine]

    So, I'd like to know:

    1. Why shouldn't I suppress this warning as I was planning to? Doesn't my case warrant suppression? I don't want to code up an implementation of GetHashCode() for this object that will never get called, since my object will never be the key in a collection.
    2. If I wanted to be pedantic, instead of suppressing, would it be more reasonable for me to override GetHashCode() with an implementation that throws a NotImplementedException?

    Update: I just looked this subject up again in Bill Wagner's good book Effective C#, and he states in "Item 10: Understand the Pitfalls of GetHashCode()":

        If you're defining a type that won't ever be used as the key in a container, this won't matter. Types that represent window controls, web page controls, or database connections are unlikely to be used as keys in a collection. In those cases, do nothing. All reference types will have a hash code that is correct, even if it is very inefficient. [...] In most types that you create, the best approach is to avoid the existence of GetHashCode() entirely.

    ...that's where I originally got this idea that I need not be concerned about GetHashCode() always.
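
    For completeness: if I did end up providing an implementation rather than suppressing, I assume the conventional shape would be to combine the same fields that my Equals() compares - the field names here are invented:

        public sealed class MySuperDuperClass
        {
            private readonly int id;
            private readonly string name;

            public MySuperDuperClass(int id, string name)
            {
                this.id = id;
                this.name = name;
            }

            public override bool Equals(object obj)
            {
                var other = obj as MySuperDuperClass;
                if (other == null)
                    return false;
                return id == other.id && string.Equals(name, other.name);
            }

            // Must return equal values whenever Equals returns true.
            public override int GetHashCode()
            {
                unchecked   // overflow is fine for hash mixing
                {
                    int hash = 17;
                    hash = hash * 23 + id.GetHashCode();
                    hash = hash * 23 + (name == null ? 0 : name.GetHashCode());
                    return hash;
                }
            }
        }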


  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example:

    - Dependency injection (Spring, etc.)
    - MVC (Struts, ASP.NET MVC)
    - ORMs (LINQ to SQL, Hibernate)
    - Agile software development

    These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not?

    EDIT: It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective on what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers.

    To answer a few questions: the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news.

    In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that it doesn't. On the other hand, I have seen a lot of comments saying what a great question it is.

    Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems, many SOers' opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!


  • Agilent E4426B signal generator locks up during multiple GPIB *SAV operations

    - by aspiehler
    I have a test fixture with an Agilent E4426B RF signal generator connected to a PC via a National Instruments Ethernet-to-GPIB bridge. My software is attempting to sanitize the instrument by presetting it and then saving the current state to all of the memory locations writable via the standard SCPI command "*SAV x,y".

    The loop works to a point, but eventually the instrument responds with an error and continuously displays the "L" icon on the front display and a "Remote preset" message at the bottom. At that point it won't respond to any more remote commands and I have to either cycle power or press LOCAL, then PRESET, at which point it takes about 3 minutes to finish presetting. At that point the "L" icon is still present, and the next GPIB command sent to the instrument causes it to report a -113 error (undefined header) in the instrument error queue.

    I fired up NI Spy to see what was happening, and found that the error was happening at the same point in the loop - "*SAV 6,2" in this case. From NI Spy:

        Send (0, 0x0017, "*SAV 6,2", 8 (0x8), NLend (0,0x01))
        Process ID: 0x00000520  Thread ID: 0x00000518
        ibsta: 0xc168  iberr: 6  ibcntl: 2 (0x2)

    And here's the code from the instrument driver:

        int CHP_E4426b::Erase()
        {
            if ((m_StatusCode = Initialize()) != GPIB_SUCCESS) // basically just sends "*RST"
                return m_StatusCode;
            m_SaveState = "*SAV %d, %d";
            for (int i = 0; i < 10; i++)
                for (int j = 0; j < 100; j++)
                {
                    sprintf(m_CmdString, m_SaveState, j, i);
                    if ((m_StatusCode = Send(m_CmdString, strlen(m_CmdString))) != GPIB_SUCCESS)
                        return m_StatusCode;
                }
            return GPIB_SUCCESS;
        }

    I tried putting a small Sleep() delay (10-20 ms) at the end of the inner loop, and to my surprise it caused the error to show up earlier rather than later: 10 ms caused the loop to error out at 44,1, and 20 ms was even sooner. I've already eliminated faulty cabling and the instrument itself as the culprit. This same type of sequence works without any error on a higher-end signal generator, so I'm tempted to chalk this up to a bug in the instrument's firmware.


  • "Row not found or changed" Problem

    - by winston schröder
    Hi there, I'm working on a SQL CE database and am getting the "Row not found or changed" exception. The exception only occurs when I try to update. On the first run after the insert it shows a MemberChangeConflict which says that my Created_at column has the same value in all three slots (current, original, database), but in a second attempt it doesn't appear anymore.

    The DataContext is instantiated on startup and freed on exit of my local(!) application. I use a sqlmetal-generated mapping and code file. In the map I added some associations and set the timestamp column's UpdateCheck property to Always, while all other columns have the setting Never. The timestamp column is marked as isVersion="true", the Id column as primary key. Since I don't dispose the DataContext, I expected to be using implicit transactions, and I run the SubmitChanges method within a TransactionScope.

    Can anyone tell me how I can update the timestamp within the code? I know about the problems one has to deal with if you dispose the DataContext, so I decided not to do this, since I use a single-user local DB cache file. (I did already use a version where I disposed the DataContext after every usage, but that version had a really bad performance and error rate, so I chose the other variant.)

        LibDB.Client.Vehicles tmp = null;
        try
        {
            tmp = e.Parameter as LibDB.Client.Vehicles;
            if (tmp == null) return;
            if (!this._dc.Vehicles.Contains(tmp))
            {
                this._dc.Vehicles.Attach(tmp);
            }
            this.ShowChangesReport(this._dc.GetChangeSet());
            using (TransactionScope ts = new TransactionScope())
            {
                try
                {
                    this._dc.SubmitChanges();
                    ts.Complete();
                }
                catch (ChangeConflictException cce)
                {
                    Console.WriteLine("Optimistic concurrency error.");
                    Console.WriteLine(cce.Message);
                    Console.ReadLine();
                    foreach (ObjectChangeConflict occ in this._dc.ChangeConflicts)
                    {
                        MetaTable metatable = this._dc.Mapping.GetTable(occ.Object.GetType());
                        LibDB.Client.Vehicles entityInConflict = (LibDB.Client.Vehicles)occ.Object;
                        Console.WriteLine("Table name: {0}", metatable.TableName);
                        Console.Write("Vin: ");
                        Console.WriteLine(entityInConflict.Vin);
                        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
                        {
                            object currVal = mcc.CurrentValue;
                            object origVal = mcc.OriginalValue;
                            object databaseVal = mcc.DatabaseValue;
                            MemberInfo mi = mcc.Member;
                            Console.WriteLine("Member: {0}", mi.Name);
                            Console.WriteLine("current value: {0}", currVal);
                            Console.WriteLine("original value: {0}", origVal);
                            Console.WriteLine("database value: {0}", databaseVal);
                        }
                        throw cce;
                    }
                }
                catch (Exception ex)
                {
                    this.ShowChangeConflicts(this._dc.ChangeConflicts);
                    Console.WriteLine(ex.Message);
                }
            }
            this.ShowChangesReport(this._dc.GetChangeSet());
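
    One thing I plan to try - just a sketch, not yet verified against SQL CE - is asking LINQ to SQL to re-read the row after a successful SubmitChanges, so the long-lived tracked entity picks up the newly generated timestamp before the next update:

        // Reuses _dc and tmp from the snippet above.
        using (TransactionScope ts = new TransactionScope())
        {
            this._dc.SubmitChanges();
            ts.Complete();
        }

        // OverwriteCurrentValues re-queries the row and copies the database
        // values (including the regenerated timestamp) into the tracked object.
        this._dc.Refresh(System.Data.Linq.RefreshMode.OverwriteCurrentValues, tmp);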


  • mysql_connect() causes page to not display (WAMP)

    - by VOIDHand
    I currently have a website running MySQL and PHP in which this is all working. I've tried installing WampServer to be able to work on the site on my own computer, but I have been having issues trying to get the site to work correctly. HTML and PHP files work correctly (by going to http://localhost/index.php, etc.), but some of the pages display "This webpage is not available" (in Chrome) or "Internet Explorer cannot display this webpage", which leads me to think there is an error in the server, because when a page genuinely doesn't exist, Chrome displays "Oops! This link appears to be broken" and IE shows its standard 404 page.

    Each page in my site has an include() call to set up an instance of a class to handle all database transactions. This class opens the connection to the database in its constructor. I have found that commenting out the contents of the constructor allows the page to load, although this understandably causes errors later in the page. This is the contents of the included file:

        class dbAccess {
            private $db;

            function __construct() {
                $this->db = mysql_connect("externalURL", "user", "password")
                    or die("Connection for database not found.");
                mysql_select_db("dbName", $this->db)
                    or die("Database not selected");
            }
            ...
        }

    As a note for what's happening here: I'm attempting to use the database on my site, rather than the local machine. The actual values of externalURL, dbName, user, and password work when I plug them into my database software and on the live site, so I don't think the issue is with those values themselves, but rather some setting in Apache or WAMP. The issue persists if I try a local database as well.

    This is an excerpt of my Apache error log for one attempt at logging into the site:

        [Wed Apr 14 15:32:54 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Apr 14 15:32:54 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Wed Apr 14 15:32:54 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Apr 14 15:32:54 2010] [notice] Parent: Created child process 1756
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Child process is running
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Acquired the start mutex.
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting 64 worker threads.
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting thread to listen on port 80.

    For the first line, isLoggedIn (from the first line) is a method of the class above. Assuming that the connection works, this error should disappear. I've tried searching for solutions online but haven't been able to find anything to help. If you need any further information to help solve this issue, feel free to ask for it. (One more note: I don't even have Skype on this computer, so I can't see it being an issue, as this conflict seems to be the default response for any WAMP issue.)

    [Edit: Removed entry from the error log as it was solved as an unrelated issue]

    Read the article

  • More GCC link time issues: undefined reference to main

    - by vikramtheone
    Hi guys, I'm writing software for a Cortex-A8 processor and I have to write some ARM assembly code to access specific registers. I'm making use of the GNU compilers and related toolchains; these tools are installed on the processor board (Freescale i.MX515) running Ubuntu, and I connect to it from my Windows host PC using WinSCP and a PuTTY terminal.

    As usual I started with a simple C project consisting of main.c and functions.s. I compile main.c using GCC, assemble functions.s using as, and link the generated object files using GCC again, but I get strange errors during this process.

    An important finding: meanwhile, I found out that my assembly code may have some issues, because when I assemble it individually using the command as -o functions.o functions.s and try running the generated functions.o with the command ./functions.o, the bash shell fails to recognize the file as an executable (on pressing Tab, functions.o does not get selected; PuTTY is not highlighting the file).

    Can anyone suggest what's happening here? Are there any specific options I have to pass to GCC during the linking process? The errors I see are strange and beyond my understanding; I don't understand what GCC is referring to. I'm pasting here the contents of main.c, functions.s, the Makefile and the list of errors. Help, please!!! Vikram

    main.c

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            puts("!!!Hello World!!!"); /* prints !!!Hello World!!! */
            return EXIT_SUCCESS;
        }

    functions.s

        /* Main program */
        .equ STACK_TOP, 0x20000800
        .text
        .global _start
        .syntax unified

        _start:
            .word STACK_TOP, start
            .type start, function

        start:
            movs r0, #10
            movs r1, #0
        .end

    Makefile

        all: hello

        hello: main.o functions.o
            gcc -o main.o functions.o

        main.o: main.c
            gcc -c -mcpu=cortex-a8 main.c

        functions.o: functions.s
            as -mcpu=cortex-a8 -o functions.o functions.s

    Errors

        ubuntu@ubuntu-desktop:~/Documents/Project/Others/helloworld$ make
        gcc -c -mcpu=cortex-a8 main.c
        as -mcpu=cortex-a8 -o functions.o functions.s
        gcc -o main.o functions.o
        functions.o: In function `_start':
        (.text+0x0): multiple definition of `_start'
        /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o:init.c:(.text+0x0): first defined here
        /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o: In function `_start':
        init.c:(.text+0x30): undefined reference to `main'
        collect2: ld returned 1 exit status
        make: *** [hello] Error 1

    Read the article

  • How do you stop scripters from slamming your website hundreds of times a second?

    - by davebug
    [update] I've accepted an answer, as lc deserves the bounty due to the well thought-out answer, but sadly, I believe we're stuck with our original worst-case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible for us to actually track hits, and any workaround (sending a non-cached web beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey bit of hardware from Cisco or the like that can help at a high level, but it's hard to justify the cost if CAPTCHAing everyone is an alternative. I'll attempt a fuller explanation in here later, as well as cleaning this up for future searchers (though others are welcome to try, as it's community wiki).

    I've added a bounty to this question and attempted to explain why the current answers don't fit our needs. First, though, thanks to all of you who have thought about this: it's amazing to have this collective intelligence helping to work through seemingly impossible problems.

    I'll be a little more clear than I was before: this is about the bag o' crap sales on woot.com. I'm the president of Woot Workshop, the subsidiary of Woot that does the design, writes the product descriptions, podcasts, and blog posts, and moderates the forums. I work in the CSS/HTML world and am only barely familiar with the rest of the developer world. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had). Usability of the site is a massive part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below derive from. CAPTCHA harms usability, and bots steal the fun and excitement out of our crap sales.

    To set up the scenario a little more: bots are slamming our front page tens of times a second, screen-scraping (and/or scanning our RSS) for the Random Crap sale. The moment they see that, it triggers a second stage of the program that logs in, clicks "I want One", fills out the form, and buys the crap.

    In current (2/6/2009) order of votes:

    lc: On Stack Overflow and other sites that use this method, they're almost always dealing with authenticated (logged-in) users, because the task being attempted requires that. On Woot, anonymous (non-logged-in) users can view our home page. In other words, the slamming bots can be non-authenticated (and essentially non-trackable except by IP address). So we're back to scanning for IPs, which a) is fairly useless in this age of cloud networking and spambot zombies and b) catches too many innocents, given the number of businesses that come from one IP address (not to mention the issues with non-static-IP ISPs and the potential performance hit of trying to track this). Oh, and having people call us would be the worst possible scenario. Can we have them call you?

    BradC: Ned Batchelder's methods look pretty cool, but they're pretty firmly designed to defeat bots built for a network of sites. Our problem is bots built specifically to defeat our site. Some of these methods could likely work for a short time, until the scripters evolved their bots to ignore the honeypot, screen-scrape for nearby label names instead of form ids, and use a JavaScript-capable browser control.

    lc again: "Unless, of course, the hype is part of you
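
    Since per-IP tracking comes up repeatedly above, here is what the naive version actually looks like, so the objections are concrete. This is a generic illustration only (plain Python, in-memory, single process; nothing here comes from Woot): a token bucket that allows a short burst and then throttles each client address.

        import time
        from collections import defaultdict

        class TokenBucket:
            """Naive per-client throttle: allow a burst, then `rate` requests/sec."""

            def __init__(self, capacity=10, rate=1.0):
                self.capacity = capacity      # burst size
                self.rate = rate              # tokens refilled per second
                self._state = defaultdict(lambda: (float(capacity), time.monotonic()))

            def allow(self, client_ip):
                tokens, last = self._state[client_ip]
                now = time.monotonic()
                # Refill in proportion to elapsed time, capped at the burst size.
                tokens = min(self.capacity, tokens + (now - last) * self.rate)
                if tokens < 1.0:
                    self._state[client_ip] = (tokens, now)
                    return False              # throttled
                self._state[client_ip] = (tokens - 1.0, now)
                return True

        bucket = TokenBucket(capacity=5, rate=0.5)
        print(bucket.allow("203.0.113.7"))    # True until the burst is spent

    As the discussion notes, the key (client_ip) is exactly what breaks down behind caches, web farms, shared corporate NATs, and zombie botnets, which is why this approach was rejected.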

    Read the article

  • Should I deal with files longer than MAX_PATH?

    - by John
    Just had an interesting case. My software reported back a failure caused by a path being longer than MAX_PATH. The path was just a plain old document in My Documents, e.g.:

        C:\Documents and Settings\Bill\Some Stupid FOlder Name\A really ridiculously long file thats really very very very..........very long.pdf

    Total length 269 characters (MAX_PATH == 260). The user wasn't using an external hard drive or anything like that; this was a file on a Windows-managed drive. So my question is this: should I care? I'm not asking whether I can deal with the long paths, I'm asking whether I should. Yes, I'm aware of the "\\?\" Unicode hack on some Win32 APIs, but it seems this hack is not without risk (as it changes the way the APIs parse paths) and also isn't supported by all APIs.

    So anyway, let me just state my position/assertions. First, presumably the only way the user was able to break this limit is if the app she used uses the special Unicode hack. It's a PDF file, so maybe the PDF tool she used uses this hack. I tried to reproduce this (by using the Unicode hack) and experimented. What I found was that although the file appears in Explorer, I can do nothing with it: I can't open it, and I can't choose "Properties" (Windows 7). Other common apps can't open the file either (e.g. IE, Firefox, Notepad). Explorer will also not let me create files/dirs whose paths are too long; it just refuses. Ditto for the command-line tool cmd.exe.

    So basically, one could look at it this way: a rogue tool has allowed the user to create a file which is not accessible by a lot of Windows (e.g. Explorer). I could take the view that I shouldn't have to deal with this. (As an aside, this isn't a vote of approval for a short max path length: I think 260 chars is a joke. I'm just saying that if the Windows shell and some APIs can't handle more than 260, then why should I?)

    So, is this a fair view? Should I say "not my problem"? Thanks! John
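
    For anyone who decides the answer is "yes, deal with it", the usual two moves are easy to sketch: detect when a normalized path would exceed the limit, and apply the \\?\ extended-length prefix that only some Win32 APIs accept. The helper below is a hypothetical illustration in Python; the function name and the threshold handling are my own, not from the question:

        import os

        MAX_PATH = 260  # classic Win32 limit, counting the terminating NUL

        def extended_length(path: str) -> str:
            # A sketch only: the \\?\ prefix disables normalization (no "..",
            # no forward slashes) and, as noted above, not every API honors it.
            abspath = os.path.abspath(path)
            if len(abspath) >= MAX_PATH and not abspath.startswith("\\\\?\\"):
                return "\\\\?\\" + abspath
            return abspath

        print(extended_length(r"C:\Documents and Settings\Bill\x" + "y" * 300 + ".pdf"))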

    Read the article

  • Constraint Satisfaction Problem

    - by Carl Smotricz
    I'm struggling my way through Artificial Intelligence: A Modern Approach in order to alleviate my natural stupidity. In trying to solve some of the exercises, I've come up against the "Who Owns the Zebra" problem, Exercise 5.13 in Chapter 5. This has been a topic here on SO, but the responses mostly addressed the question "how would you solve this if you had a free choice of problem-solving software available?" I accept that Prolog is a very appropriate programming language for this kind of problem, and there are some fine packages available, e.g. in Python as shown by the top-ranked answer, and also standalone. Alas, none of this is helping me "tough it out" in the way outlined by the book.

    The book appears to suggest building a set of dual or perhaps global constraints, and then implementing some of the algorithms mentioned to find a solution. I'm having a lot of trouble coming up with a set of constraints suitable for modelling the problem. I'm studying this on my own, so I don't have access to a professor or TA to get me over the hump; this is where I'm asking for your help. I see little similarity to the examples in the chapter.

    I was eager to build dual constraints and started out by creating (the logical equivalent of) 25 variables:

        nationality1, nationality2, nationality3, ..., nationality5
        pet1, pet2, pet3, ..., pet5
        drink1, ..., drink5

    and so on, where the number indicates the house's position. This is fine for building the unary constraints, e.g. "The Norwegian lives in the first house":

        nationality1 = { :norway }

    But most of the constraints are a combination of two such variables through a common house number, e.g. "The Swede has a dog":

        nationality[n] = { :sweden } AND pet[n] = { :dog }

    where n can range from 1 to 5, obviously. Or stated another way:

        nationality1 = { :sweden } AND pet1 = { :dog }
        XOR nationality2 = { :sweden } AND pet2 = { :dog }
        XOR nationality3 = { :sweden } AND pet3 = { :dog }
        XOR nationality4 = { :sweden } AND pet4 = { :dog }
        XOR nationality5 = { :sweden } AND pet5 = { :dog }

    ...which has a decidedly different feel to it than the "list of tuples" advocated by the book:

        ( X1, X2, X3 = { val1, val2, val3 }, { val4, val5, val6 }, ... )

    I'm not looking for a solution per se; I'm looking for a start on how to model this problem in a way that's compatible with the book's approach. Any help appreciated.
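
    As a concrete starting point, here is one possible encoding (a sketch of mine, not the book's code): treat each attribute as an assignment of its five values to house positions 0 through 4. A unary constraint then pins a single variable, and a binary constraint such as "the Swede has a dog" simply requires two attributes to agree on a house index, which dissolves the five-way XOR chain above. Minimal brute-force Python with only two attributes and two of the puzzle's clues:

        from itertools import permutations

        HOUSES = range(5)  # house positions 0..4, left to right

        solutions = []
        for nat in permutations(HOUSES):
            norway, sweden, dane, german, brit = nat
            if norway != 0:          # unary: the Norwegian lives in the first house
                continue
            for pet in permutations(HOUSES):
                dog, cat, bird, horse, fish = pet
                if sweden != dog:    # binary: the Swede keeps a dog
                    continue
                solutions.append((nat, pet))

        print(len(solutions))  # candidates that survive these two constraints

    Every remaining clue becomes one more guard of the same two shapes, so the full puzzle never needs the explicit XOR expansion; a proper CSP solver then replaces the nested loops with backtracking plus constraint propagation.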

    Read the article

< Previous Page | 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226  | Next Page >