Search Results

Search found 10417 results on 417 pages for 'large'.

Page 352/417 | < Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >

  • Does anyone still believe in the Capability Maturity Model for Software?

    - by Ed Guiness
    Ten years ago, when I first encountered the CMM for software, I was - like many, I suppose - struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels, improving its processes. But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first hand that their processes are as chaotic, and the quality of their software products as varied, as those of other, non-CMM businesses. So I'm wondering: has anyone seen a real, tangible benefit from adherence to process improvement according to CMM? And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as Six Sigma) have been equally or more beneficial? Does anyone still believe? As an aside, for those who haven't yet seen it, check out this funny-because-it's-true parody.

    Read the article

  • Slow T-SQL query, convert to LINQ to Object

    - by yimbot
    I have a T-SQL query which populates a DataSet from an MS SQL database.

        string qry = "SELECT * FROM EvnLog AS E " +
                     "WHERE TimeDate = (SELECT MAX(TimeDate) FROM EvnLog WHERE Code = E.Code) " +
                     "AND (Event = 8) AND (TimeDate BETWEEN '" + Start + "' AND '" + Finish + "')";

    The database is quite large and, being the type of nested query it is, the DataAdapter can take a number of minutes to fill the DataSet. I have extended the DataAdapter's timeout value to 480 seconds to combat this, but the client still complains about slow performance and occasional timeouts. To combat this, I was considering executing a simpler query (i.e. just taking the date range) and then populating a generic List which I could then execute a LINQ query against. The simple query executes very quickly, which is great. However, I cannot seem to build a LINQ query which generates the same results as the T-SQL query above. Is this the best solution to this problem? Can anyone provide tips on rewriting the above T-SQL into LINQ? I have also considered using a DataView, but cannot seem to get the results from that either.
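
    A LINQ-to-Objects translation has one subtlety: the SQL subquery takes MAX(TimeDate) per Code over the whole table, so if only the date-ranged rows are fetched, the per-code maximum is computed over a different set and the results can differ. A minimal sketch, assuming an EvnLog class whose Code, TimeDate and Event properties mirror the columns (names are assumptions); precomputing the maxima also avoids re-running the nested Max for every row:

        // using System.Linq; using System.Collections.Generic;
        // logs: List<EvnLog> filled by the simple date-range query
        var maxByCode = logs.GroupBy(l => l.Code)
                            .ToDictionary(g => g.Key, g => g.Max(l => l.TimeDate));

        var results = logs.Where(e => e.Event == 8
                                   && e.TimeDate == maxByCode[e.Code]);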

    Read the article

  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I am processing this file, I often run into SQL Server timeout errors. Here's the process I follow in my VB application (in general):

    1. Delete all records from the temporary table, to remove last month's data (e.g. DELETE FROM tempTable)
    2. Rip the text file into the temp table
    3. Fill in extra information in the temp table, such as organization_id, user_id, group_code, etc.
    4. Update the data in the real tables based on the data computed in the temp table

    The problem is that I often run commands like

        UPDATE tempTable
        SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id)

    and these commands frequently time out. I have tried bumping the timeouts up as far as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number of rows to manipulate, but I would think that a database purported to be able to handle millions and millions of rows should be able to cope with 500k pretty easily. Am I doing something wrong with how I am going about processing this data? Please help. Any and all suggestions welcome.
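
    A couple of hedged suggestions, sketched rather than tested: the correlated subquery above runs once per row, so rewriting the update as a join usually helps far more than raising the timeout, and an index on the join key (myUsers.external_id, ideally tempTable.external_id too) is what makes the join cheap:

        -- the same update written as a join (SQL Server syntax)
        UPDATE t
        SET    t.user_id = u.user_id
        FROM   tempTable AS t
        JOIN   myUsers   AS u ON u.external_id = t.external_id;

        -- and TRUNCATE is much cheaper than DELETE for clearing a staging table
        TRUNCATE TABLE tempTable;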

    Read the article

  • Is there a bug in the Matrix.RotateAt method for certain angles? .Net Winforms

    - by Jules
    Here's the code I'm using to rotate:

        Dim m As New System.Drawing.Drawing2D.Matrix
        Dim size = image.Size
        m.RotateAt(degreeAngle, New PointF(CSng(size.Width / 2), CSng(size.Height / 2)))
        Dim temp As New Bitmap(600, 600, Imaging.PixelFormat.Format32bppPArgb)
        Dim g As Graphics = Graphics.FromImage(temp)
        g.Transform = m
        g.DrawImage(image, 0, 0)

    Notes: (1) disposals removed for brevity; (2) I test the code with a 200 x 200 rectangle; (3) the size 600 x 600 is just an arbitrary large value that I know will fit the right and bottom sides of the rotated image, for testing purposes; (4) I know that with this code the top and left edges will be clipped, because I'm not translating the origin after the rotate. The problem only occurs at certain angles: (1) at 90, the right-hand edge disappears completely; (2) at 180, the right and bottom edges are there, but very faded; (3) at 270, the bottom edge disappears completely. Is this a known bug? If I manually rotate the corners and draw the image by specifying an output rectangle, I don't get the same problem, though it is slightly slower than using RotateAt.
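
    A hedged workaround rather than a confirmed diagnosis: edge loss at exact multiples of 90 degrees is often a sampling artifact of GDI+'s default pixel-offset handling, and setting the offset and interpolation modes explicitly before drawing tends to keep the border pixels (shown in C# purely for brevity; the VB equivalent is direct):

        using System.Drawing;
        using System.Drawing.Drawing2D;

        using (Graphics g = Graphics.FromImage(temp))
        {
            // sample from pixel centers instead of the default half-pixel grid
            g.PixelOffsetMode = PixelOffsetMode.HighQuality;
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.Transform = m;
            g.DrawImage(image, 0, 0);
        }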

    Read the article

  • Avoiding resource (localizable string) duplication with String.Format

    - by Hrvoje Prgeša
    I'm working on an application (.NET, but not relevant) where there is large potential for resource/string duplication - most of these strings are simple, like:

        Volume: 33
        Volume: 33 (dB)
        Volume 33 dB
        Volume (dB)
        Command - Volume: 33 (dB)

    where X, Y and the unit are the same. Should I define a new resource for each of the strings, or is it preferable to use String.Format to compose some of them, e.g.:

        String.Format("{0}: {1}", Resource.Volume, 33)
        String.Format("{0}: {1} {2}", Resource.Volume, 33, Resource.dB)
        Resource.Volume
        String.Format("{0} ({1})", 33, Resource.dB)
        String.Format("{0} ({1})", Resource.Volume, Resource.dB)
        String.Format("Command - {0}: {1} {2}", Resource.Volume, 33, Resource.dB)

    I would also define string formats like "{0}: {1}" in the resources, so there would be a possibility of reordering words. I would use this approach selectively, not throughout the whole application. And how about:

        String.Format("{0}: {1}", Volume, Resource.Muted_Volume)                     // = Volume: Muted
        Resource.Muted_Volume
        String.Format("{0}: {1} (by user {2})", Volume, Resource.Muted_Volume, "xy") // = Volume: Muted (by user xy)

    The advantage is cutting the number of resources by a factor of 4-5. Are there any hidden dangers in this approach? Could someone give me an example (language) where this would not work correctly?
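
    A minimal sketch of the pattern-in-resources idea, assuming hypothetical resource names (LabelValueUnitFormat = "{0}: {1} {2}"), which lets translators reorder the pieces per language:

        using System.Globalization;

        // "Volume: 33 dB" in English; a translation can move the placeholders around
        string text = String.Format(CultureInfo.CurrentCulture,
                                    Resource.LabelValueUnitFormat,
                                    Resource.Volume, 33, Resource.dB);

    One known hazard worth weighing against the savings, and one answer to the closing question: languages with case or gender agreement may need a different word form for "Volume" depending on the pattern it is inserted into, so a shared fragment can read wrongly in some translations.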

    Read the article

  • AWK: compare apache dates without using regular expression

    - by smallmeans
    I'm writing a log-analysis application and wanted to grab apache log records between two certain dates. Assume that a date is formatted as such:

        22/Dec/2009:00:19    (day/month/year:hour:minute)

    Currently, I'm using a regular expression to replace the month name with its numeric value and remove the separators, so the above date is converted to 200912220019 (year, month, day, hour, minute - putting the year first makes string comparison match chronological order), making a date comparison trivial. But running a regex on each record of a large file, say one containing a quarter million records, is extremely costly. Is there any other method not involving regex substitution? Thanks in advance. Edit: here's the function doing the conversion/comparison:

        function dateInRange(t, from, to) {
            sub(/[[]/, "", t)                 # strip the leading "[" of the log field
            split(t, a, "[/:]")               # a[1]=day, a[2]=month name, a[3]=year, a[4]=hour, a[5]=minute
            match("JanFebMarAprMayJunJulAugSepOctNovDec", a[2])
            a[2] = sprintf("%02d", (RSTART + 2) / 3)  # month name -> two-digit number
            s = a[3] a[2] a[1] a[4] a[5]
            return s >= from && s <= to
        }

    "from" and "to" are the intervals in the aforementioned format, and "t" is the raw apache log date/time field (e.g. [22/Dec/2009:00:19:36).
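
    A hedged alternative sketch: build the month map once in BEGIN, so each record costs only a split and array lookups, with no regex substitution or match() in the per-record path (the leading "[" is handled by treating it as just another separator):

        BEGIN {
            n = split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
            for (i = 1; i <= n; i++) mon[m[i]] = sprintf("%02d", i)
        }

        # t: raw field like "[22/Dec/2009:00:19:36"; from/to: YYYYMMDDhhmm strings
        function dateInRange(t, from, to,    a, s) {
            split(t, a, "[][/:]")   # a[2]=day, a[3]=month name, a[4]=year, a[5]=hh, a[6]=mm
            s = a[4] mon[a[3]] a[2] a[5] a[6]
            return s >= from && s <= to
        }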

    Read the article

  • How to define 2-bit numbers in C, if possible?

    - by Eddy
    For my university project I'm simulating a process called random sequential adsorption. One of the things I have to do involves randomly depositing squares (which cannot overlap) onto a lattice until there is no more room left, repeating the process several times in order to find the average 'jamming' coverage %. Basically I'm performing operations on a large array of integers, of which 3 possible values exist: 0, 1 and 2. The sites marked with '0' are empty, the sites marked with '1' are full. Initially the array is defined like this:

        int i, j;
        int n = 1000000000;
        int array[n][n];

        for (j = 0; j < n; j++) {
            for (i = 0; i < n; i++) {
                array[i][j] = 0;
            }
        }

    Say I want to deposit 5*5 squares randomly on the array (such that they cannot overlap), with the squares represented by '1's. This would be done by choosing the x and y coordinates randomly and then creating a 5*5 square of '1's with the top-left point of the square starting at that point. I would then mark sites near the square as '2's. These represent the sites that are unavailable, since depositing a square at those sites would cause it to overlap an existing square. This process continues until there is no more room left to deposit squares on the array (basically, no more '0's left in the array). Anyway, to the point: I would like to make this process as efficient as possible, by using bitwise operations. This would be easy if I didn't have to mark sites near the squares. I was wondering whether creating a 2-bit number would be possible, so that I can account for the sites marked with '2'. Sorry if this sounds really complicated, I just wanted to explain why I want to do this.
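
    C has no 2-bit type, but packing four sites per byte gives the same effect. A minimal sketch, assuming the lattice lives on the heap (a stack array of the size shown above would not fit in memory anyway) and that 0, 1 and 2 are the only values stored:

        #include <stdlib.h>

        static unsigned char *lattice;   /* n*n sites, 2 bits per site */

        void lattice_alloc(size_t n) {
            lattice = calloc((n * n + 3) / 4, 1);   /* calloc zeroes: all sites empty */
        }

        void site_set(size_t idx, unsigned v) {     /* idx = y*n + x, v in {0,1,2} */
            size_t   byte  = idx / 4;
            unsigned shift = (idx % 4) * 2;         /* bit offset within the byte */
            lattice[byte] = (unsigned char)((lattice[byte] & ~(3u << shift))
                                            | ((v & 3u) << shift));
        }

        unsigned site_get(size_t idx) {
            return (lattice[idx / 4] >> ((idx % 4) * 2)) & 3u;
        }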

    Read the article

  • Online job-searching is tedious. Help me automate it.

    - by ehsanul
    Many job sites have broken searches that don't let you narrow down jobs by experience level, and even when they do, it's usually wrong. This requires you to wade through hundreds of postings that you can't apply for before finding a relevant one - quite tedious. Since I'd rather focus on writing cover letters etc., I want to write a program to look through a large number of postings, and save the URLs of just those jobs that don't require years of experience. I don't require help writing the scraper to get the HTML bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job. This should not be too difficult, as job posts are usually very explicit about this ("must have 5 years experience in..."), but there may be some issues with overly simple solutions. In my case, I'm looking for entry-level positions. Often they aren't labelled "entry-level", but when those words do appear the job should probably be saved. Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable for excluding jobs. But then I realized some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to take a look at. Hmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". I can handle that too, but it makes me wonder what other patterns I'm not thinking of, and whether I'm excluding many jobs. That's what brings me here: to find a better way to do this than regexes, if there is one. I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a bayesian filter, maybe?
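
    For what it's worth, a sketch of the combined check (the patterns are illustrative, not exhaustive, and deliberately conservative: anything not clearly senior is kept, trading false positives for fewer false negatives):

        using System.Text.RegularExpressions;

        static bool WorthReading(string post)
        {
            // explicit entry-level mention: always keep
            if (Regex.IsMatch(post, @"entry[\s-]?level", RegexOptions.IgnoreCase))
                return true;

            // "3+ years", "10 years", etc.: drop, unless it's a low-end range
            bool senior   = Regex.IsMatch(post, @"\b(?:[3-9]|1\d)\+?\s*(?:years|yrs)",
                                          RegexOptions.IgnoreCase);
            bool lowRange = Regex.IsMatch(post,
                @"\b0\s*(?:-|to)\s*\d\s*(?:years|yrs)|\b(?:less|fewer)\s+than\s+\d",
                RegexOptions.IgnoreCase);
            return !senior || lowRange;
        }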

    Read the article

  • "java.lang.OutOfMemoryError: Java heap space" in image and array storage

    - by totalconscience
    I am currently working on an image-processing demonstration in Java (an applet). I am running into the problem where my arrays are too large and I am getting the "java.lang.OutOfMemoryError: Java heap space" error. The algorithm I run creates an NxD float array where N is the number of pixels in the image and D is the coordinates of each pixel plus the colorspace components of each pixel (usually 1 for grayscale or 3 for RGB). For each iteration, the algorithm creates one of these NxD float arrays and stores it for later use in a Vector, so that the user of the applet may look at the individual steps. My client wants the program to be able to load a 500x500 RGB image and run that as the upper bound. There are about 12 to 20 iterations per run, so that means I need to be able to store a 12x500x500x5 float array in some fashion. Is there a way to process all of this data and, if possible, how? Example of the issue: I am loading a 512 by 512 grayscale image, and even before the first iteration completes I run out of heap space. The line it points me to is:

        Y.add(new float[N][D]);

    where Y is a Vector and N and D are as described above. This is the second instance of the code using that line.
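
    A quick sizing estimate (assuming 4-byte floats and D = 5, i.e. two coordinates plus RGB): one 500x500 step is 500 * 500 * 5 * 4 bytes, about 5 MB, so retaining 12-20 steps needs roughly 60-100 MB of live data - well above the old default applet heap of around 64 MB. That suggests either raising the heap (e.g. the -Xmx JVM option, which newer plugin versions accept via the java_arguments applet parameter, if the deployment allows it) or not keeping every step as a live float[N][D] array - for instance, storing earlier steps compressed or on disk and reinflating only the one the user is viewing.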

    Read the article

  • Managing StringBuilder Resources

    - by Jim Fell
    My C# (.NET 2.0) application has a StringBuilder variable with a capacity of 2.5 MB. Obviously, I do not want to copy such a large buffer to a larger buffer space every time it fills. By that point there is so much data in the buffer anyway that removing the older data is a viable option. Can anyone see any obvious problems with how I'm doing this (i.e. am I introducing more performance problems than I'm solving), or does it look okay?

        tText_c = new StringBuilder(2500000, 2500000);

        private void AppendToText(string text)
        {
            // when the buffer is over 95% full, drop the older half
            if (tText_c.Length * 100 / tText_c.Capacity > 95)
            {
                tText_c.Remove(0, tText_c.Length / 2);
            }
            tText_c.Append(text);
        }

    EDIT - additional information: in this application, new data is received very rapidly (on the order of milliseconds) through a serial connection. I don't want to populate the multiline textbox with this new information so frequently, because that kills the performance of the application, so I'm saving it to a StringBuilder. Every so often, the application copies the contents of the StringBuilder to the textbox and wipes out the StringBuilder contents.

    Read the article

  • Application that depends heavily on stored procedures

    - by PieterG
    We currently have an application that depends largely on stored procedures, with heavy use of temp tables. It's an extremely large application. Facing this situation, I would like to use Entity Framework or Linq2Sql for a rewrite. I might consider using Fluent Hibernate or Subsonic, as I've used them quite extensively in the past. I've had problems with Linq2Sql generating the return types for the stored procedures because of the usage of temp tables, and I think it's cumbersome to go and change all the stored procedures from temp tables to in-memory tables. Considering the two choices I want to decide between, which is the better route to take, and why? If my choices are extremely idiotic, please provide alternatives. Edit: the reason for the question and the change is that the data access layer is non-existent and was built 10 years ago. We currently still run into a lot of issues with it. I don't want to divulge too much, but if you saw it, your eyes would start bleeding :)

    Read the article

  • How to Sort a TreeList in Sitecore 6 in the Source

    - by Scott
    My team uses Sitecore 6 as a content management system, and then .NET to interface with the Sitecore API. In many of our templates we make use of a Treelist. When adding a new item to a Treelist's selected items, it automatically puts the item at the bottom of the list, and some of these lists get very large. In most cases, end users would like to see these lists sorted descending by a Date field that is part of the templates that can be added to the Treelist selection. Programmatically, on the .NET side, it's very easy to handle this using LINQ's OrderByDescending, and everything displays great to site visitors. What I am trying to figure out is how to get the same ordering in the Sitecore Content Editor. I've not found anything from searching Google, other than that there seems to be a SortBy you can specify in the source; I tried this but couldn't get it to have any effect. Has anyone dealt with this before? Again, the main goal is to sort items in a Treelist in the Sitecore Content Editor itself. Thanks for any input anyone has.

    Read the article

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places:

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 1 ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 2 ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 3 ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp
        FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that in the latter query's execution plan, a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows; in the former query, a single row is fetched three times. Is there a way to perform the query efficiently without lots of unions?
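
    A hedged sketch of the usual middle ground: CROSS APPLY keeps the one-seek-per-place shape of the UNION version without repeating the subquery text. Note that an index leading with placeId, e.g. (placeId, ts), is what lets each probe be a seek; the PRIMARY KEY above leads with ts instead:

        SELECT ca.placeId, ca.ts, ca.temp
        FROM (SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3) AS k(placeId)
        CROSS APPLY (
            SELECT TOP 1 placeId, ts, temp
            FROM #t
            WHERE placeId = k.placeId
            ORDER BY ts DESC
        ) AS ca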

    Read the article

  • Sphinx - delimiters

    - by yoda
    Hi, I would like to know if the Sphinx engine works with any delimiters (like commas and periods in normal MySQL). My question comes not from an urge to avoid them entirely, but to escape them, or at least to ensure they don't cause conflicts when performing MATCH operations with FULLTEXT searches, since I have problems dealing with them in MySQL by default and I would prefer not to be forced to replace those delimiters with other characters to get a good set of results. Sorry if I'm saying something stupid, but I don't have experience with Sphinx or other complementary (?) search engines. To give you an example: if I perform a search with "Passat 2.0 TDI", MySQL by default would identify the period as a delimiter, and since "2" and "0" are too short to be considered words by default, the results would be a bit messed up. Is this easy to handle with Sphinx (or another search engine)? I'm open to suggestions. This is for a large project, with probably more than 500,000 possible records (not trivial at all). Cheers!

    Read the article

  • byte + byte = int... why?

    - by Robert C. Cartaino
    Looking at this C# code...

        byte x = 1;
        byte y = 2;
        byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'

    The result of any math performed on byte (or short) types is implicitly cast back to an integer. The solution is to explicitly cast the result back to a byte, so...

        byte z = (byte)(x + y); // works

    What I am wondering is why? Is it architectural? Philosophical? We have:

        int + int = int
        long + long = long
        float + float = float
        double + double = double

    So why not:

        byte + byte = byte
        short + short = short

    A bit of background: I am performing a long list of calculations on "small numbers" (i.e. < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster (because of cache hits). But the extensive byte-casts spread through the code make it that much more unreadable.

    Read the article

  • What's the "correct way" to organize this project?

    - by user571747
    I'm working on a project that allows multiple users to submit large data files and perform operations on them. The "backend", which performs these operations, is written in Perl, while the "frontend" uses PHP to load HTML template files and determine which content to deliver. Data is stored in a database (MySQL, SQLite, Oracle), and while there is data which has not yet been acted upon, Perl adds it to a running queue which delivers data to other threads based on system load. In addition, there may be pre- and post-processing of the data before and after the main Perl script operates (the specifications are unclear), so I may want to allow these processors to be user-selectable plugins. I had been writing this project in a more procedural fashion, but I am quickly realizing the benefit of separating concerns so as to limit the scope one change has on the rest of the project. I'm quite inexperienced with design patterns and am curious what the best way to proceed is. I've heard MVC thrown around quite a bit, but I am unsure of how to apply it. Specifically, what are some good options for structuring this code (in terms of design patterns and folder hierarchy)? How can I achieve this with both PHP and Perl while minimizing duplicated code between languages? Should I keep my PHP files in the top level so I don't have ugly paths in the URL? Also, if I want to provide interchangeable databases, does each table need its own DAO implementation?

    Read the article

  • Error 2025: The supplied DisplayObject must be a child of the caller

    - by Mocca
    Hi, sorry, new to ActionScript 3. I have a display() function for an object rotator (image-based, like a QT object movie). It first saves the current image in a helper variable and then allocates a new image, from the library, beneath the old one. To get a nice crossfade effect, the old image's alpha is looped down via ENTER_FRAME and then removed - which is where there seems to be an issue with the display list, maybe recognizing oldImg's value as being already added? (It's not a first-pass error.) By the way, do I have to remove the old image, or can I leave it for when it's called up via the mouse position again? (The image number can get fairly large.) Does anyone have further insight? Thanks!

        function display(num:Number):void   // num: image number
        {
            ...
            oldImg = newImg;
            ClassReference = getDefinitionByName("Class" + num) as Class;
            imgBD = new ClassReference(0, 0);
            newImg = new Bitmap(imgBD);
            images.addChild(newImg);
            newImg.x = 0;
            newImg.y = 0;
        }

        function onEnter(evt:Event):void
        {
            if (oldImg)
            {
                if (oldImg.alpha > 0)
                    oldImg.alpha -= 0.15;
                else
                    images.removeChild(oldImg);   // <-- this is the line that throws Error 2025
            }
            ...
        }
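
    A hedged guess at the mechanics, with a sketch: once display() runs again, oldImg is reassigned while the ENTER_FRAME handler may still be mid-fade on the previous image, so removeChild() can end up called for an object that was already removed - which is exactly what Error 2025 reports. Guarding the removal with DisplayObjectContainer.contains() is the usual defensive pattern:

        function onEnter(evt:Event):void
        {
            if (oldImg)
            {
                if (oldImg.alpha > 0)
                    oldImg.alpha -= 0.15;
                else if (images.contains(oldImg))   // only remove while still a child
                    images.removeChild(oldImg);
            }
        }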

    Read the article

  • "for" loop from program 7.6 from Kochan's "Programming in Objective-C"

    - by Mr_Vlasov
    "The sigma notation is shorthand for a summation. Its use here means to add the values of 1/2^i, where i varies from 1 to n. That is, add 1/2 + 1/4 + 1/8 .... If you make the value of n large enough, the sum of this series should approach 1. Let’s experiment with different values for n to see how close we get." #import "Fraction.h" int main (int argc, char *argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; Fraction *aFraction = [[Fraction alloc] init]; Fraction *sum = [[Fraction alloc] init], *sum2; int i, n, pow2; [sum setTo: 0 over: 1]; // set 1st fraction to 0 NSLog (@"Enter your value for n:"); scanf ("%i", &n); pow2 = 2; for (i = 1; i <= n; ++i) { [aFraction setTo: 1 over: pow2]; sum2 = [sum add: aFraction]; [sum release]; // release previous sum sum = sum2; pow2 *= 2; } NSLog (@"After %i iterations, the sum is %g", n, [sum convertToNum]); [aFraction release]; [sum release]; [pool drain]; return 0; } Question: Why do we have to create additional variable sum2 that we are using in the "for" loop? Why do we need "release previous sum" here and then again give it a value that we just released? : sum2 = [sum add: aFraction]; [sum release]; // release previous sum sum = sum2; Is it just for the sake of avoiding memory leakage? (method "add" initializes a variable that is stored in sum2)

    Read the article

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16-bit, 24-bit packed, 32-bit, etc. It could be signed or unsigned, and it could be 32- or 64-bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24-bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16-bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32- and 16-bit signed PCM respectively; I'll probably have to store the 24-bit PCM in an int32_t for 32-bit, 8-bit-byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat ) {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
        ...

    Limitations: I'm working in C/C++ - no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16-bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
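
    Not the only way to slice it, but a common embedded-friendly sketch: resolve the format once, at stream-open time, to a small descriptor (bytes per sample plus a reader function pointer), so the per-sample path has no switch. The first three format codes follow the numbering in the snippet above; everything else here is illustrative:

        #include <stdint.h>
        #include <string.h>

        typedef double (*SampleReader)(const void *p);

        static double read_u8 (const void *p) { return *(const uint8_t *)p; }
        static double read_s8 (const void *p) { return *(const int8_t  *)p; }
        static double read_u16(const void *p) { uint16_t v; memcpy(&v, p, sizeof v); return v; }
        static double read_s16(const void *p) { int16_t  v; memcpy(&v, p, sizeof v); return v; }
        static double read_f32(const void *p) { float    v; memcpy(&v, p, sizeof v); return v; }

        typedef struct { unsigned bytesPerSample; SampleReader read; } PcmDesc;

        static const PcmDesc kDesc[] = {
            { 0, 0        },   /* unused                       */
            { 1, read_u8  },   /* 1: unsigned 8 bit            */
            { 1, read_s8  },   /* 2: signed 8 bit              */
            { 2, read_u16 },   /* 3: unsigned 16 bit           */
            { 2, read_s16 },   /* 4: signed 16 bit (assumed)   */
            { 4, read_f32 },   /* 5: 32-bit float (assumed)    */
        };

        /* fetch frame f of channel c, whatever the stored width */
        double getSample(void **pcm, int fmt, int c, long f) {
            const unsigned char *base = (const unsigned char *)pcm[c];
            return kDesc[fmt].read(base + (size_t)f * kDesc[fmt].bytesPerSample);
        }

    The memcpy-based reads sidestep alignment traps on packed buffers; the 16-bit-byte DSP port would still need its own descriptor table, since bytes-per-sample arithmetic changes there.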

    Read the article

  • @dynamic property needs setter with multiple behaviors

    - by ambertch
    I have a class that contains multiple user objects, and as such has an array of them as an instance variable:

        NSMutableArray *users;

    The tricky part is setting it. I am deserializing these objects from a server via Objective Resource, and for backend reasons users can only be returned as a long string of UIDs - what I have locally is a separate dictionary of users keyed by UID. Given the string uidString of comma-separated UIDs, I override the default setter and populate the actual user objects:

        @dynamic users;

        - (void)setUsers:(id)uidString {
            users = [NSMutableArray arrayWithArray:
                        [[User allUsersDictionary] objectsForKeys:
                            [(NSString *)uidString componentsSeparatedByString:@","]]];
        }

    The problem is this: I now serialize these to the database using SQLitePO, which stores them as the array of user objects, not the original string. So when I retrieve it from the database, the setter mistakenly treats this array of user objects as a string! What I actually want is to adjust the setter's behavior depending on whether the object came from the DB or over the network. I can't just make the getter serialize back into a string without tearing up a lot of code that references this array of user objects, and I tried to detect in the setter whether a string or an array is coming in:

        if ([uidString respondsToSelector:@selector(addObject)]) {
            // Already an array, so don't do anything - just assign
            users = uidString
        }

    but with no success... so I'm kind of stuck - any suggestions? Thanks in advance!
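
    One concrete observation, then a sketch: NSMutableArray's selector is addObject: - with a trailing colon - so @selector(addObject) names a different, nonexistent selector and respondsToSelector: always answers NO, which would explain the failed detection. Checking the incoming class directly is the more usual pattern (this mirrors the original code, including its allUsersDictionary helper and its memory-management style):

        - (void)setUsers:(id)value {
            if ([value isKindOfClass:[NSString class]]) {
                users = [NSMutableArray arrayWithArray:
                            [[User allUsersDictionary] objectsForKeys:
                                [value componentsSeparatedByString:@","]]];
            } else {
                users = value;   // already an array, e.g. deserialized from the DB
            }
        }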

    Read the article

  • Is this linear search implementation actually useful?

    - by Helper Method
    In Matters Computational I found this interesting linear search implementation (it's actually my Java implementation ;-)):

        public static int linearSearch(int[] a, int key) {
            int high = a.length - 1;
            int tmp  = a[high];

            // put a sentinel at the end of the array
            a[high] = key;

            int i = 0;
            while (a[i] != key) {
                i++;
            }

            // restore original value
            a[high] = tmp;

            if (i == high && key != tmp) {
                return NOT_CONTAINED;
            }
            return i;
        }

    It basically uses a sentinel, which is the searched-for value, so that you always find the value and don't have to check the array boundaries. The last element is stored in a temp variable, and then the sentinel is placed at the last position. When the value is found (remember, it is always found due to the sentinel), the original element is restored, and the index is checked: if it is the last index and the restored element is unequal to the searched-for value, NOT_CONTAINED (-1) is returned; otherwise, the index is. While I found this implementation really clever, I wonder if it is actually useful. For small arrays it seems to be always slower, and for large arrays it only seems to be faster when the value is not found. Any ideas?

    Read the article

  • is it better to query database or grab from file? php & mysql

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking that it would be better to keep these words in an array and grab that array whenever needed, instead of querying the database every time (since the words won't be changing that much). Is there much performance difference in doing this? And if I were to do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }

        $fp = fopen('noWordsArr.php', 'w');
        fwrite($fp, $newArray);
        fclose($fp);

    But all I get in the other file is "Array". So I figured I could write this and then have a cron job hit the file every few days or so, in case things have changed. But I guess if there is no performance advantage, then it probably won't be necessary and I can just query the database every time I need to access the words.
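
    The "Array" output happens because fwrite() stringifies its second argument. A minimal sketch of one common fix, assuming the file is meant to be loaded back with include - var_export() emits the array as valid PHP source:

        <?php
        $rows = array();
        while ($row = mysql_fetch_assoc($query)) {
            $rows[] = $row;
        }

        // write the array out as executable PHP
        file_put_contents('noWordsArr.php',
            '<?php return ' . var_export($rows, true) . ';');

        // ...and elsewhere, to read it back:
        $words = include 'noWordsArr.php';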

    Read the article

  • VS2008 Link Error Using SafeInt3.hpp in 64bit mode.

    - by photo_tom
    I have the below code, which links and runs fine in 32-bit mode:

        #include "safeint3.hpp"

        typedef SafeInt<SIZE_T> SAFE_SIZE_T;

        SAFE_SIZE_T sizeOfCache;
        SAFE_SIZE_T _allocateAmt;

    Here safeint3.hpp is the current version that can be found on CodePlex (SafeInt). For those who are unaware of it, SafeInt is a template class that makes working with different integer types and sizes "safe" - to quote the Channel 9 video on the software, "it writes the code that you should". Which is my case: I have a class that manages a large in-memory cache of objects (6 GB), and I am very concerned about making sure I don't have overflow/underflow issues on my pointers/sizes/other integer variables. For this use, it solves many problems. My problem comes when moving from 32-bit dev mode to 64-bit production mode. When I build the app in this mode, I get the following linker warnings:

        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyUint64(unsigned __int64 const &,unsigned __int64 const &,unsigned __int64 *)" (?IntrinsicMultiplyUint64@@YA_NAEB_K0PEA_K@Z) already defined in ImageInRamCache.obj; second definition ignored
        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyInt64(__int64 const &,__int64 const &,__int64 *)" (?IntrinsicMultiplyInt64@@YA_NAEB_J0PEA_J@Z) already defined in ImageInRamCache.obj; second definition ignored

    While I understand I can ignore the warning, I would like either to (a) prevent it from occurring or (b) make it disappear, so that my QA department doesn't flag it as a problem. And after spending some time researching it, I cannot find a way to do either.

    Read the article

  • Variable not accessible within an if statement

    - by Chris
    I have a variable which holds a score for a game. My variable is accessible and correct outside of an if statement, but not inside it, as shown below. score is declared at the top of main.cpp and calculated in the display function, which also contains the code below.

        cout << score << endl; // works

        if (!justFinished) {
            cout << score << endl; // doesn't work: prints a large negative number
            endTime = time(NULL);
            ifstream highscoreFile;
            highscoreFile.open("highscores.txt");
            if (highscoreFile.good()) {
                highscoreFile.close();
            } else {
                std::ofstream outfile("highscores.txt");
                cout << score << endl;
                outfile << score << std::endl;
                outfile.close();
            }
            justFinished = true;
        }

        cout << score << endl; // works

    Read the article

  • Magento products will not show in category

    - by Aaron
    I've recently been tasked with the build and deployment of a large e-commerce site. In the past we've had to use the client's legacy X-cart installation for redevelopment (too far integrated with their existing workflow). We'd heard good things about Magento, so I've set up a test install to get to grips with it. After a couple of initial issues, there is a live development site which displays categories on the default theme. The problem we've hit now is that products don't display! After a lot of in-depth research into this, all I've been able to discover is that about half of the developers discussing it endorse using other solutions entirely, with the other half saying that after the steep learning curve the platform is as wonderful as we'd initially been led to believe. Now, my test category is showing, so I know this is configured properly. I've set up three test products and associated them with it (all done following the Magento user guide), and I've checked, double-checked and thrice-checked that the products are enabled and visible individually, yet the front end still says the category has no products in it. I've cleared the cache repeatedly and reset everything possible many times in Index Management - no products show up. I have to make a call tomorrow morning on whether we're going ahead with Magento. If I can't even get it to show products, I'm going to have to go with something with a more established track record and more community support available. Can anybody advise what could possibly be wrong here?

    Read the article
