Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • What useful GDB scripts have you used/written?

    - by nik
    People use gdb on and off for debugging; of course there are lots of other debugging tools across the various OSes, with and without GUIs, and maybe other fancy IDE features. I would like to know what useful gdb scripts you have written and liked. I do not primarily mean a dump of commands in a something.gdb file that you source to pull out a bunch of data, but if that made your day, go ahead and talk about it. Think conditional processing, control loops, and functions written for more elegant and refined debugging, and maybe even for whitebox testing. Things get interesting when you start debugging remote systems (say, over a serial/ethernet interface), and what if the target is a multi-processor (and multithreaded) system? Let me give a simple case as an example: a script that traversed the entries serially to locate a bad entry in a large hash table implemented on an embedded platform. That helped me debug a broken hash table once.
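
    For illustration, a minimal sketch of that kind of script in gdb's own command language; the hash-table layout, field names, and arguments are invented for the example:

        define walk_table
          # $arg0 = pointer to the hash table, $arg1 = bucket count (hypothetical layout)
          set $i = 0
          while $i < $arg1
            if $arg0->buckets[$i].in_use && $arg0->buckets[$i].key == 0
              printf "suspect bucket %d\n", $i
            end
            set $i = $i + 1
          end
        end
        document walk_table
        Walk a hash table serially and report buckets whose key field looks corrupt.
        end

    Saved in a .gdb file and sourced, this could be run as "walk_table my_table 1024", including against a remote target attached over gdbserver.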

    Read the article

  • How to investigate if OpenCL is possible for an algorithm

    - by Marnix
    I have a heavy-duty algorithm in C# that takes two large Bitmaps of about 10000x5000 pixels and performs photo and ray-collision operations on a 3D model to map the photos onto the model. I would like to know whether it is possible to convert such an algorithm to OpenCL to parallelize the work. But before asking you to go into the details of the algorithm, I would like to know how I can investigate whether my algorithm is convertible to OpenCL at all. I am not experienced in OpenCL and would like to know whether it is worth getting into it and learning how it works. Are there things I have to look for that will definitely not work on the graphics card (for-loops, recursion)?

    Update: my algorithm goes something like this:

        foreach photo
            split the photo into 64x64 blocks
            foreach block
                cast a ray from the camera to the 3D model
                foreach triangle in the 3D model
                    perform a ray check
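
    As a rough illustration of how the inner loops might map to OpenCL (one work-item per 64x64 block, a plain for-loop over triangles; the names and buffer layout are invented for the sketch, and the intersection test is a placeholder):

        __kernel void raycast_blocks(__global const float *tri_verts,   /* 9 floats per triangle */
                                     const unsigned int    tri_count,
                                     __global int         *hit_index)   /* one result per block */
        {
            size_t block = get_global_id(0);   /* work-item id = photo block id */
            int best = -1;

            for (unsigned int t = 0; t < tri_count; ++t) {
                /* placeholder: a real kernel would do a ray/triangle intersection here */
                if (tri_verts[t * 9] > 0.0f) {
                    best = (int)t;
                    break;
                }
            }
            hit_index[block] = best;
        }

    For-loops like the one above are fine in OpenCL C; recursion is what is not supported, so any recursive part of the algorithm would need to be rewritten iteratively.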

    Read the article

  • Documenting user scenarios and measuring/testing

    - by Rimian
    Please forgive me, as I don't quite remember the exact terms for what I am talking about... hence my question. Recently I worked on a large Agile team where I encountered a method of defining user scenarios (much like user stories). These scenarios were a few very basic short sentences with keywords and a structure that could be understood by humans (especially project managers) and could also be coded against using some Java framework (for verifying tests). The exact structure of this mini-language used keywords like "when", "and", or "if", which was how the framework parsed and verified the result. The purpose of this framework was to interface between management and the acceptance testing framework, so essentially management could write the tests themselves in English. The scenario went something like this: "When a user visits URL and user clicks on X, something happens (that can be measured)". Can anyone help me remember exactly what I am talking about? Many thanks

    Read the article

  • Best practices for Subversion and Visual Studio projects

    - by Alex Marshall
    I've recently started working on various C# projects in Visual Studio as part of a plan for a large-scale system that will be used to replace our current system, which is built from a cobbling-together of various programs and scripts written in C and Perl. The projects I'm now working on have reached critical mass for being committed to Subversion. I was wondering what should and should not be committed to the repository for Visual Studio projects. I know that Visual Studio generates various files that are just build artifacts and don't really need to be committed, and I was wondering if anybody had any advice for properly using SVN with Visual Studio. At the moment, I'm using an SVN 1.6 server with Visual Studio 2010 beta. Any advice or opinions are welcome.
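
    For illustration, a hedged sketch of the usual approach: keep the .sln/.csproj files and source under version control, and tell Subversion to ignore the generated output. A typical (but by no means exhaustive) svn:ignore list might look like:

        bin
        obj
        *.suo
        *.user
        TestResults

    which could be applied from the solution directory with something like "svn propset svn:ignore -F ignore.txt ." (the ignore.txt file name is just an example).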

    Read the article

  • jQuery: Preload an image on request with a callback function?

    - by tarnfeld
    Hey, I'm helping out a friend with a website he's developing and we are having trouble with preloading images on request. Basically, when the user clicks on a thumbnail of the product, a <div> slides down that includes a scrolling carousel of large images. In total there are about 20MB of images that could be loaded (if you loaded them all), so preloading them on page load is not an option. Is there a way we could call a JavaScript function that begins to preload about 4 images and then fires a callback function when it's done? Thanks! P.S. We are using jQuery on the page...
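
    A minimal sketch of one way to do this with plain Image objects and a countdown (the URL array, element id, and slide-down effect are assumptions for the example):

        function preloadImages(urls, callback) {
            var remaining = urls.length;
            if (remaining === 0) { callback(); return; }
            $.each(urls, function (i, url) {
                var img = new Image();
                img.onload = img.onerror = function () {
                    remaining -= 1;
                    if (remaining === 0) { callback(); }   // all requested images finished
                };
                img.src = url;   // setting src starts the download
            });
        }

        // usage: preload the first four carousel images, then reveal the panel
        preloadImages(firstFourUrls, function () { $('#carousel').slideDown(); });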

    Read the article

  • Best practices for organizing .NET P/Invoke code to Win32 APIs

    - by Paul Sasik
    I am refactoring a large and complicated code base in .NET that makes heavy use of P/Invoke to Win32 APIs. The structure of the project is not the greatest, and I am finding DllImport statements all over the place, very often duplicated for the same function and also declared in a variety of ways: the import directives and methods are sometimes declared as public, sometimes private, sometimes as static and sometimes as instance methods. My worry is that refactoring may have unintended consequences, but this might be unavoidable. Are there documented best practices I can follow that can help me out? My instinct is to organize a static/shared Win32 P/Invoke API class that lists all of these methods and associated constants in one file... (The code base is made up of over 20 projects with a lot of Windows message passing and cross-thread calls. It's also a VB.NET project upgraded from VB6, if that makes a difference.)
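
    For illustration, a sketch (in C# for brevity, though the project itself is VB.NET) of the single shared declarations class the poster describes; the two imports shown are just examples of the convention:

        using System;
        using System.Runtime.InteropServices;

        internal static class NativeMethods
        {
            // Constants used with the message-passing calls live next to the imports.
            public const int WM_USER = 0x0400;

            [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            public static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);

            [DllImport("kernel32.dll")]
            public static extern uint GetCurrentThreadId();
        }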

    Read the article

  • Java: where should I put anonymous listener logic code?

    - by tulskiy
    Hi, we had a debate at work about the best practice for using listeners in Java: should listener logic stay in the anonymous class, or should it be in a separate method? For example:

        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                // code here
            }
        });

    or

        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                buttonPressed();
            }
        });

        private void buttonPressed() {
            // code here
        }

    Which is the recommended way in terms of readability and maintainability? I prefer to keep the code inside the listener and only make it an inner class if it gets too large. Here I assume that the code is not duplicated anywhere else. Thank you.

    Read the article

  • Best way to produce automated exports in tab-delimited form from Teradata?

    - by Cade Roux
    I would like to be able to produce a file by running a command or batch which basically exports a table or view (SELECT * FROM tbl) in text form (default conversions to text for dates, numbers, etc. are fine), tab-delimited, with NULLs being converted to empty fields (i.e. a NULL column would have no space between the tab characters), with appropriate line termination (CRLF or Windows), and preferably also with column headings. This is the same export I can get in SQL Assistant 12.0 by choosing the export option, using a tab delimiter, setting my NULL value to '', and including column headings. I have been unable to find the right combination of options - the closest I have gotten is by building a single column with CAST and '09'XC, but the rows still have a leading 2-byte length indicator in most settings I have tried. I would prefer not to have to build large strings for the various different tables.

    Read the article

  • Can JAXB Incrementally Marshal An Object?

    - by Intellectual Tortoise
    I've got a fairly simple, but potentially large, structure to serialize. Basically the structure of the XML will be:

        <simple_wrapper>
          <main_object_type>
            <sub_objects>
          </main_object_type>
          ... main_object_type repeats up to 5,000 times
        </simple_wrapper>

    The main_object_type can have a significant amount of data. On my first 3,500-record extract, I had to give the JVM far more memory than it should need. So I'd like to write out to disk after each main_object_type (or a bunch of them). I know that setting Marshaller.JAXB_FRAGMENT would let me marshal fragments, but then I lose the outer XML document tags and the <simple_wrapper>. Any suggestions?
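
    One commonly suggested workaround, sketched below (not necessarily the best fit here): write the document shell yourself with StAX and marshal each record as a fragment into the same XMLStreamWriter. MainObjectType and loadRecords() are hypothetical stand-ins for the real JAXB class and data source:

        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;
        import java.io.FileOutputStream;

        public class IncrementalWriter {
            public static void main(String[] args) throws Exception {
                JAXBContext ctx = JAXBContext.newInstance(MainObjectType.class);
                Marshaller m = ctx.createMarshaller();
                m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE);   // no per-record XML declaration

                XMLStreamWriter xsw = XMLOutputFactory.newInstance()
                        .createXMLStreamWriter(new FileOutputStream("out.xml"), "UTF-8");
                xsw.writeStartDocument("UTF-8", "1.0");
                xsw.writeStartElement("simple_wrapper");

                for (MainObjectType record : loadRecords()) {   // hypothetical: stream records one at a time
                    m.marshal(record, xsw);                      // one fragment per record
                    xsw.flush();                                 // push it out, let the record be collected
                }

                xsw.writeEndElement();
                xsw.writeEndDocument();
                xsw.close();
            }
        }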

    Read the article

  • What makes MVVM uniquely suited to WPF?

    - by Reed Copsey
    The Model-View-ViewModel pattern is very popular with WPF and Silverlight. I've been using it for my most recent projects, and am a very large fan. I understand that it's a refinement of MVP. However, I am wondering exactly what unique characteristics of WPF (and Silverlight) allow MVVM to work, and what prevents this pattern from working (or at least makes it difficult) with other frameworks or technologies. I know MVVM has a strong dependency on the powerful data binding technology within WPF. This is the one feature which many articles and blogs mention as being the key to WPF providing the strong separation of View from ViewModel. However, data binding exists in many forms in other UI frameworks. There are even projects like Truss that provide WPF-style databinding to POCOs in .NET. What features, other than data binding, make WPF and Silverlight uniquely suited to Model-View-ViewModel?

    Read the article

  • How do I shrink my SQL Server Database?

    - by Rory Becker
    I have a database that is nearly 1.9 GB in size. MSDE2000 does not allow DBs that exceed 2.0 GB. I need to shrink this DB (and many like it at various client locations). I have found and deleted many hundreds of thousands of records which are considered unneeded. These records account for a large percentage of some of the main (largest) tables in the database, so it's reasonable to assume much space should now be reclaimable. So now I need to shrink the DB to account for the missing records. I execute "DBCC ShrinkDatabase('MyDB')"... no effect. I have tried the various shrink facilities provided in MSSMS... still no effect. I have backed up the database and restored it... still no effect. Still 1.9 GB. Why? Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than OSql or similar.
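
    For reference, a hedged sketch of the kind of osql-runnable batch usually tried here; the logical file name is an assumption (sp_helpfile reports the real one), and whether it actually reclaims space depends on how the deleted data was stored:

        USE MyDB
        GO
        EXEC sp_helpfile                        -- shows the logical data/log file names
        GO
        DBCC SHRINKDATABASE (MyDB, 10)          -- aim to leave about 10% free space
        GO
        DBCC SHRINKFILE (MyDB_Data, 1500)       -- target the data file directly, size in MB
        GO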

    Read the article

  • What is the best database for my needs?

    - by Mr. Flibble
    I am currently using MS SQL Server 2008, but I'm not sure it is the best system for this particular task. I have a single table like so:

        PK_ptA  PK_ptB  DateInserted  LookupColA  LookupColB ... LookupColF  DataCol (ntext)

    A common query is:

        SELECT TOP(1000000) DataCol FROM table
        WHERE LookupColA=x AND LookupColD=y AND LookupColE=z
        ORDER BY DateInserted DESC

    The table has about a billion rows, with 5 million inserted per day. My main problem with SQL Server is that it isn't too easy to shard or spread out the data files. Also, exporting seems to max out at about 1,000 rows per second (about 1 MB/s), which seems very slow. Another problem is that, with SQL Server, if I want to add a new LookupCol the log file grows enormously, requiring a large amount of rarely used free space on tap. Are there any obviously better solutions for this problem?

    Read the article

  • how to do this in shell

    - by user150674
    I have a very large tab-delimited file named 'ColCheckMe' that I am asked to process. I am told that each line in 'ColCheckMe' has 7 columns and that the values in the 5th column are integers. Using shell functions, how would I verify that these conditions are satisfied? In addition, each value in column 1 is supposed to be unique - how would I verify that? And how would I write a shell function that counts the number of occurrences of the word "SpecStr" in 'ColCheckMe'? I tried the first part, which checks for the valid number of fields and checks that the 5th field is an integer:

        nawk '
        NF != 7 { printf("[%d] has invalid [%d] number of fields\n", FNR, NF) }
        $5 !~ /^[0-9]+$/ { printf("[%d] 5th field is invalid [%s]\n", FNR, $5) }' ColCheckMe

    Now I want to verify that the values in column 1 of the same file are unique. Also, is there a way to write a shell function to count the occurrences of the word "SpecStr" in 'ColCheckMe'? Thanks a lot
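
    A possible follow-on in the same nawk style (a sketch; it assumes the file really is tab-delimited):

        # flag any value in column 1 that has been seen before
        nawk -F'\t' 'seen[$1]++ { printf("[%d] duplicate value in column 1 [%s]\n", FNR, $1) }' ColCheckMe

        # count every occurrence of "SpecStr", even several per line
        nawk '{ n += gsub(/SpecStr/, "&") } END { print n+0 }' ColCheckMe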

    Read the article

  • Questions every good PHP Developer should be able to answer

    - by Rachel
    I was going through "Questions every good .NET developer should be able to answer" and was highly impressed with the content and approach of that question, so in the same spirit I am asking this question for PHP developers. What questions do you think a good PHP programmer should be able to respond to? EDIT: I am marking this question as community wiki as it is not user specific and it aims to serve the programming community at large. Looking forward to some amazing responses. NOTE: Please answer with questions too, as suggested in the comments, so that people can also learn something new about the language.

    Read the article

  • Client side thumb creation OR Server side?

    - by Totty
    Hi, I have two options to choose from:

    Client side:
      pro: image manipulation occurs on the client, so there is no load on the server
      con: more data is uploaded

    Server side:
      pro: less data is uploaded
      con: image manipulation occurs on the server, so it adds load and requests will be queued...

    For example, when you upload an image you end up with 4 images: a large image, a medium one, thumb1 and thumb2. With the client-side approach you would need to upload all 4 optimized images; with the server-side approach only 1 optimized image is uploaded and then manipulated. Which is the better and less resource-consuming way?

    Read the article

  • How to calculate bandwidth consumption for a Hosting Account using C# in ASP.Net Application?

    - by Steve Johnson
    Hi all, I am working on SaaS hosting software. A large number of sites are hosted on the server. I am trying to calculate bandwidth consumption (bytes transferred in and out) using C#, as described here, using the MS Log Parser. In that approach, if the log files are deleted by the user or by any administrator, the bandwidth calculation will no longer be possible. Q1: What is the standard way to measure the bandwidth of the various hosting accounts (websites) on a single server? Q2: If the Log Parser mechanism (as described above) is used, how do I take care of the security issue? Is there some system directory, event viewer log, or something similar that cannot be deleted except by the System account and that contains bandwidth data? Please point me in the right direction. Thanks
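
    For reference, a hedged example of the kind of Log Parser query that approach boils down to, run against one site's IIS W3C logs (the log path is an assumption, and the cs-bytes/sc-bytes fields must be enabled in the IIS logging configuration):

        LogParser.exe -i:IISW3C "SELECT SUM(cs-bytes) AS BytesIn, SUM(sc-bytes) AS BytesOut FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log" -o:CSV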

    Read the article

  • GetVirtualPath not making any sense.

    - by HeavyWave
    Can anyone explain why this code, with the given routes, returns the first route?

        routes.MapRoute(null, "user/approve", new { controller = "Users", action = "Approve" }),
        routes.MapRoute(null, "user/{username}", new { controller = "Users", action = "Profile" }),
        routes.MapRoute(null, "user/{username}/{action}", new { controller = "Users" }),
        routes.MapRoute(null, "user/{username}/{action}/{id}", new { controller = "Users" }),
        routes.MapRoute(null, "search/{query}", new { controller = "Artists", action = "Search", page = 1 }),
        routes.MapRoute(null, "search/{query}/{page}", new { controller = "Artists", action = "Search" }),
        routes.MapRoute(null, "music", new { controller = "Artists", action = "Index", page = 1 }),
        routes.MapRoute(null, "music/page/{page}", new { controller = "Artists", action = "Index" })

        var pageLinkValueDictionary = new RouteValueDictionary(this.linkWithoutPageValuesDictionary);
        pageLinkValueDictionary.Add("page", pageNumber);
        var virtualPathData = RouteTable.Routes.GetVirtualPath(this.viewContext.RequestContext, pageLinkValueDictionary);

    Here GetVirtualPath always returns user/approve, although there is no {page} parameter in that route. Furthermore, everything works as expected without the first route. I have found this link http://www.codeplex.com/aspnet/WorkItem/View.aspx?WorkItemId=2033 but it wasn't very helpful. It looks like GetVirtualPath was not implemented with large collections of routes in mind. I am using ASP.NET MVC 1.0.

    Read the article

  • SQL Server 2000 incorrect message: duplicate key with unique index

    - by iar
    Database server: SQL Server 2000 - 8.00.760 - SP3 - Standard Edition. I have a table with 5 columns into which I want to add a large number (300,000) of records. There is a unique index on the combination of two columns. If I add the records with this unique index in place, SQL Server gives this error message:

        Msg 2601, Level 14, State 3, Line 1
        Cannot insert duplicate key row in object 'TESTTABLE' with unique index 'test'.
        The statement has been terminated.
        (0 row(s) affected)

    However, if I delete the unique index and then add the records, I do not get any error when I create this unique index afterwards.
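
    One way to check whether the loaded data really contains duplicate combinations (run after loading with the index dropped; ColA and ColB are placeholders for the two indexed columns):

        SELECT ColA, ColB, COUNT(*) AS cnt
        FROM TESTTABLE
        GROUP BY ColA, ColB
        HAVING COUNT(*) > 1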

    Read the article

  • MySQL to AppEngine

    - by Daniel Naito
    Hi Nick! How are you? I'm from Brazil and study at FATEC (a college located in Brazil). I'm trying to learn about AppEngine. Right now, I'm trying to load a large database from MySQL into AppEngine to perform some queries, but I don't know how I can do it. I did some testing with CSV files, but is there any way to perform a direct import from MySQL? This database is from Pentaho BI Server (www.pentaho.com). Thank you for your attention. Regards, Daniel Naito

    Read the article

  • Unique ID for Word 2007 paragraphs

    - by Ganish
    Hello, I am writing large Word 2007 documents which are often being changed. I have to number paragraphs with stationary unique numbers that will not change as the document is edited. The numbers should be unique and should not change even if previous numbers are deleted. The order of the list is not mandatory, and the addition of a new number before existing numbers is possible (for instance, the sequence 1, 4, 3 means that paragraphs 1-3 were written, then #2 was deleted, then #4 was added; #3 was not affected by the later editing). The mechanism should be internal to the document, as I am working both online and offline. The numbers are allocated to every document individually. Since I don't know how to program under Word, I'd appreciate getting a complete solution. Regards, Ganish

    Read the article

  • Keystrokes in Winforms app causing window to close unexpectedly

    - by Paul Sasik
    I have a strange issue that has arisen recently: whenever I enter text, even a single character, into a textbox in any Form in my application, it causes the form and its parent to close. I've checked the following so far:

      - Errant/mis-assigned event handlers that may be interpreting a keystroke as a Form cancel
      - I am using KeyPreview in several windows, but debugging shows this is not the cause
      - It happens in any form of the application
      - It happens even with brand-new text boxes dropped on the form
      - Tried removing the WithEvents declaration from the text box declarations (VB.NET)
      - The result is DialogResult.Cancel when I break in the code after Show or ShowDialog
      - The forms do not use the AcceptButton or CancelButton properties (set to none)

    Note: I am modifying a large codebase with a lot of code that I have yet to touch. What else could be causing this strange behavior?

    Read the article

  • How can I make the Qt Graphics View Framework support custom layers?

    - by jnblue
    Qt's Graphics View framework is very powerful, but I have not found a way to support custom layers. In Qt there is QGraphicsScene::ItemLayer, but QGraphicsScene renders all items as if they were in that one layer. I want to manage the items in several layers, just like Illustrator and CorelDraw: only the items in the current layer should receive events, be selectable, get keyboard focus, etc. Other layers (not the current one) should not receive any scene events. The main reason for using layers is that I could catalogue a large number of items more clearly, and without needing to deliver events to every layer's items, I think the framework would be more efficient. A last question: does QGraphicsView support rendering several stacked graphics scenes at the same time? If it does, I think the "custom layers" problem could be solved that way. Thanks very much!
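
    One common workaround (a sketch, not a built-in Qt feature; the item types and layer layout are invented for the example) is to give each layer its own parent item, reparent items to their layer, and disable the layers that should not react to events, since disabling a parent item also disables its children:

        #include <QGraphicsScene>
        #include <QGraphicsRectItem>
        #include <QGraphicsEllipseItem>

        void setupLayers(QGraphicsScene &scene)
        {
            // Two "layer" parents; z-values control stacking order.
            QGraphicsRectItem *layer0 = scene.addRect(QRectF());
            QGraphicsRectItem *layer1 = scene.addRect(QRectF());
            layer1->setZValue(1);                                 // drawn above layer0

            // Items are parented to a layer instead of being added to the scene directly.
            new QGraphicsEllipseItem(0, 0, 10, 10, layer1);

            // Make only the "current" layer interactive.
            layer0->setEnabled(false);   // a disabled parent disables its children too
            layer1->setEnabled(true);
        }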

    Read the article

  • T-SQL Dynamic SQL and Temp Tables

    - by George
    It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure. However, I can reference a temp table created by a dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.

    A simple 2-table scenario: I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key that identifies the parent Order. An Order can have 1 to n Items.

    I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned by the query, as well as the parent Order.

    Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead, one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree. The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining Orders to Items, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within it to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet; I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I could reuse the client code that populates my custom Order and child Item objects from the 2 data tables returned.

    What I was hoping to do was this: construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table, as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:

        TempSQL = " INSERT INTO #ItemsToQuery (OrderId, ItemId)
                    SELECT Orders.OrderId, Items.ItemId
                    FROM Orders, Items
                    WHERE Orders.OrderId = Items.OrderId
                    AND /* some unpredictable Order filters go here */
                    AND /* some unpredictable Items filters go here */ "

    Then, I would call a stored procedure:

        CREATE PROCEDURE GetItemsAndOrders(@tempSql as text)
        AS
            Execute (@tempSql)   -- to create and fill the #ItemsToQuery table

            SELECT * FROM Items
            WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)

            SELECT * FROM Orders
            WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)

    The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.

    Three workarounds come to mind, but I'm looking for a better one:

    1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could insert the data into a temporary table using static SQL and, because the table would no longer be created by dynamic SQL, it could be queried without issue. (I could also investigate passing the new table-type parameter instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.

    2) I could perform all the queries from the client. The first would be something like this:

        SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
        SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)

    This still gives me the ability to reuse my client-side object-population code, because the Orders and Items continue to be returned in two different tables.

    I have a feeling that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon-feeding on that one. If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
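
    For what it's worth, a hedged sketch of a common variation on the original plan: create #ItemsToQuery with static SQL inside the procedure first, and let the dynamic string only INSERT into it. A temp table created in the outer scope is visible to EXEC()'d SQL (it is the reverse that fails), so the two static SELECTs at the end still return their results to the client:

        CREATE PROCEDURE GetItemsAndOrders (@tempSql varchar(8000))  -- varchar rather than text so EXEC() accepts it
        AS
        BEGIN
            CREATE TABLE #ItemsToQuery (OrderId int, ItemId int)

            -- @tempSql is expected to be an INSERT INTO #ItemsToQuery ... SELECT ... statement
            EXECUTE (@tempSql)

            SELECT * FROM Items  WHERE ItemId  IN (SELECT ItemId FROM #ItemsToQuery)
            SELECT * FROM Orders WHERE OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
        END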

    Read the article

  • Recommendations for 'C' project architecture guidelines?

    - by SiegeX
    Now that I have got my head wrapped around the 'C' language to the point where I feel proficient enough to write clean code, I'd like to focus my attention on project architecture guidelines. I'm looking for a good resource that covers the following topics:

      - How to create an interface that promotes code maintainability and is extensible for future upgrades.
      - Library creation guidelines. For example, when should I consider using static vs. dynamic libraries? How do I properly design an ABI to cope with either one?
      - Header files: what to partition out and when. Examples of when to use 1:1 vs. 1:many .h to .c mappings.
      - Anything you feel I missed but is important when attempting to architect a new C project.

    Ideally, I'd like to see some example projects ranging from small to large and see how the architecture changes depending on project size, function, or customer. What resource(s) would you recommend for such topics? Thanks

    Read the article

  • Python objects as userdata in ctypes callback functions

    - by flight
    The C function myfunc operates on a larger chunk of data. The results are returned in chunks to a callback function:

        int myfunc(const char *data, int (*callback)(char *result, void *userdata), void *userdata);

    Using ctypes, it's no big deal to call myfunc from Python code and have the results returned to a Python callback function. The callback works fine:

        myfunc = mylib.myfunc
        myfunc.restype = c_int
        CALLBACKFUNC = CFUNCTYPE(c_int, c_char_p, c_void_p)
        myfunc.argtypes = [POINTER(c_char), CALLBACKFUNC, c_void_p]

        def mycb(result, userdata):
            print result
            return True

        input = "A large chunk of data."
        myfunc(input, CALLBACKFUNC(mycb), 0)

    But is there any way to give a Python object (say, a list) as userdata to the callback function? In order to store away the result chunks, I'd like to do e.g.:

        def mycb(result, userdata):
            userdata.append(result)
            return True

        userdata = []

    But I have no idea how to cast the Python list to a c_void_p so that it can be used in the call to myfunc. My current workaround is to implement a linked list as a ctypes structure, which is quite cumbersome.
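
    For illustration, a hedged sketch of one way to do this with ctypes.py_object, assuming the declarations above; the simpler alternative is to ignore userdata entirely and let the callback close over the list:

        import ctypes

        results = []

        def mycb(result, userdata):
            # Recover the Python list from the void* handle and append to it.
            obj = ctypes.cast(userdata, ctypes.POINTER(ctypes.py_object)).contents.value
            obj.append(result)
            return True

        holder = ctypes.py_object(results)                       # must stay referenced while myfunc runs
        userdata = ctypes.cast(ctypes.pointer(holder), ctypes.c_void_p)

        # myfunc(input, CALLBACKFUNC(mycb), userdata)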

    Read the article
