Search Results

Search found 22961 results on 919 pages for 'memory management'.


  • Branching and Merging Strategies

    - by benPearce
    I have been tasked with coming up with a strategy for branching, merging and releasing over the next 6 months. The complication comes from the fact that we will be running multiple projects, all with different code changes and different release dates but approximately the same development start dates. At present we are using VSS for code management, but we are aware that it will probably cause some issues and will be migrating to TFS before new development starts. What strategies should I be employing, and what things should I be considering before setting a plan down? Sorry if this is vague; feel free to ask questions and I will update with more information if required.

    Read the article

  • ASP.NET MVC as a service host for SOA like architecture

    - by Delucia
    I'm creating a distributed application that includes a lot of services, and I'm looking for a technology that allows me to create and manage many services easily. I know managing and deploying Windows services is not fun. I'm thinking of using ASP.NET MVC as the host for my services, where each controller action essentially becomes a service: I can communicate with a service via simple HTTP requests and responses, and not have to deal with the complexity of something like WCF. Services need to be isolated, and ASP.NET requests are isolated as far as I know, i.e. if one request throws an exception it will not affect other running requests. But I still have questions about the management of the services: how will it be possible to see which services are running, or to stop and resume services? Also, ASP.NET MVC applications are passive, i.e. they only do something upon a request; what if I want a service that initiates work on its own?
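
    For what it's worth, a minimal sketch of the idea (the controller and action names are invented for illustration): each public action becomes one HTTP-addressable service operation, with no WCF contracts involved.

        using System.Web.Mvc;

        // Hypothetical service controller: each action is one operation,
        // reachable as /PricingService/Status or /PricingService/Quote
        // under the default {controller}/{action} route.
        public class PricingServiceController : Controller
        {
            // GET /PricingService/Status -- a cheap liveness check
            public ActionResult Status()
            {
                return Content("running", "text/plain");
            }

            // POST /PricingService/Quote -- one unit of service work per request
            [HttpPost]
            public ActionResult Quote(string symbol)
            {
                // ... real work would happen here ...
                return Json(new { symbol, accepted = true });
            }
        }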

    Read the article

  • Why can final object be modified?

    - by Matt McCormick
    I came across the following code in a code base I am working on:

        public final class ConfigurationService {
            private static final ConfigurationService INSTANCE = new ConfigurationService();
            private List providers;

            private ConfigurationService() {
                providers = new ArrayList();
            }

            public static void addProvider(ConfigurationProvider provider) {
                INSTANCE.providers.add(provider);
            }
            ...

    INSTANCE is declared as final. Why can objects still be added to INSTANCE? Shouldn't that invalidate the use of final? (It doesn't.) I'm assuming the answer has something to do with pointers and memory, but I would like to know for sure.
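
    The distinction in play can be shown in a few lines: final freezes the reference, not the object it refers to.

        import java.util.ArrayList;
        import java.util.List;

        public class FinalDemo {
            private static final List<String> ITEMS = new ArrayList<String>();

            public static void main(String[] args) {
                ITEMS.add("allowed");              // mutating the referenced object is legal
                // ITEMS = new ArrayList<String>(); // compile error: cannot reassign a final field
                System.out.println(ITEMS);         // prints [allowed]
            }
        }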

    Read the article

  • Advanced Java book in the lines of CLR via c# or C# in Depth?

    - by devoured elysium
    I want to learn how things work in depth in Java. Coming from a C# background, there were a couple of very good books that go really deep into C# (C# in Depth and CLR via C#, to name the most popular). Is there anything like that for Java? I searched on Amazon, but nothing seemed to go as deep into Java as those two go into C#. I don't want to know more about specific classes, or how to use this or that library; I want to learn how objects are created in memory, how they are laid out on the stack and heap, etc. A more fundamental kind of knowledge, let's say. I've read some chapters of Effective Java and The Java Programming Language, but they don't seem to go as deep as I'd want them to. Maybe there are people who know both C# and Java, have read any of the referred books, and know of any that might be useful? Thanks

    Read the article

  • managing library dependencies with Boost.Build and C++

    - by user931794
    I want to develop a project which can be built on a bunch of different platforms. The project code will be in C++; what's the best way to manage libraries? I plan on using bjam as the build system because I'm going to be depending on Boost and their unit testing framework as well. The two dependent libraries are Boost itself and FLTK. The possibilities that come to mind for library management are:

    1. include build artifacts (binaries) and headers for all supported platforms in-tree
    2. include complete source for all dependent libraries in-tree, and somehow script them as dependencies
    3. a combination of 1 and 2, like node.js does with v8
    4. inform the user that they need to build the libraries themselves and then have them on the PATH or in some special directory, like libcurl does with its dependencies

    What is the best approach here? The project will probably not grow beyond a few thousand lines over the next six months, but I want to make the right choice here so that I don't have to come back and switch everything around later.

    Read the article

  • Make compiler copy characters using movsd

    - by Suma
    I would like to copy a relatively short sequence of memory (less than 1 KB, typically 2-200 bytes) in a time-critical function. The best code for this on the CPU side seems to be rep movsd. However, I somehow cannot make my compiler generate this code. I hoped (and I vaguely remember seeing it do so) that using memcpy would do this via a compiler intrinsic, but based on disassembly and debugging it seems the compiler is emitting a call to the memcpy/memmove library implementation instead. I also hoped the compiler might be smart enough to recognize the following loop and use rep movsd on its own, but it seems it does not:

        char *dst;
        const char *src;
        // ...
        for (int r = size; --r >= 0; )
            *dst++ = *src++;

    Is there some way to make the Visual Studio compiler generate a rep movsd sequence other than using inline assembly?
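
    One avenue worth sketching, assuming MSVC and a size that is a multiple of 4: the compiler exposes the string instructions directly as intrinsics in <intrin.h>, and __movsd compiles to exactly this instruction.

        #include <cstddef>
        #include <intrin.h>

        // Copies 'bytes' bytes (assumed to be a multiple of 4) as 32-bit dwords.
        void copy_dwords(void *dst, const void *src, std::size_t bytes)
        {
            __movsd(static_cast<unsigned long *>(dst),
                    static_cast<const unsigned long *>(src),
                    bytes / 4); // the count is in dwords, not bytes
        }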

    Read the article

  • C++ Static array vs. Dynamic array?

    - by user69514
    What is the difference between a static array and a dynamic array in C++? I have to do an assignment for my class and it says not to use static arrays, only dynamic arrays. I've looked in the book and online, but I don't seem to understand. I thought static arrays were created at compile time and dynamic arrays at runtime, but I might be confusing this with memory allocation. Can you explain the difference between a static array and a dynamic array in C++? Thanks.
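
    A minimal sketch of the distinction the assignment is drawing:

        int main()
        {
            int fixed[10];              // "static": size fixed at compile time, automatic storage
            int n = 10;                 // imagine n only becomes known at runtime
            int *dynamic = new int[n];  // "dynamic": size chosen at runtime, allocated on the heap
            delete[] dynamic;           // dynamic arrays must be freed explicitly
            return 0;
        }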

    Read the article

  • BigDecimal precision not persisted with javax.persistence annotations

    - by dkaczynski
    I am using the javax.persistence API and Hibernate to create annotations and persist entities and their attributes in an Oracle 11g Express database. I have the following attribute in an entity:

        @Column(precision = 12, scale = 9)
        private BigDecimal weightedScore;

    The goal is to persist a decimal value with a maximum of 12 digits, a maximum of 9 of them to the right of the decimal point. After calculating the weightedScore, the result is 0.1234, but once I commit the entity to the Oracle database, the value displays as 0.12. I can see this either by using an EntityManager object to query the entry or by viewing it directly in the Oracle Application Express (Apex) interface in a web browser. How should I annotate my BigDecimal attribute so that the precision is persisted correctly? Note: we use an in-memory HSQL database to run our unit tests, and it does not exhibit the precision problem, with or without the @Column annotation.
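
    One hedged thing to check, offered purely as an assumption: precision and scale only affect generated DDL, so if the Oracle table was created before the annotation was in place, the column may still carry its old definition. Spelling the type out explicitly makes the intent visible either way:

        // Assumption: force the exact Oracle column type in the generated schema.
        @Column(columnDefinition = "NUMBER(12,9)")
        private BigDecimal weightedScore;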

    Read the article

  • How to serialize a collection of base type and see the concrete types in easy to read XML

    - by Jason Coyne
    I have a List which is populated with objects of various concrete types which subclass BaseType. I am using the WCF DataContractSerializer, which produces:

        <Children>
          <BaseType xmlns:d3p1="http://schemas.datacontract.org/2004/07/Tasks" i:type="d3p1:ConcreteTypeA"></BaseType>
          <BaseType xmlns:d3p1="http://schemas.datacontract.org/2004/07/Tasks" i:type="d3p1:ConcreteTypeB"></BaseType>
        </Children>

    Is there any way to get this to generate

        <Children>
          <ConcreteTypeA/>
          <ConcreteTypeB/>
        </Children>

    instead? The real goal is to let users generate some XML to load into memory, and the users are of a skill level that asking them for the original XML is not going to be successful.
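
    A hedged sketch of one way to get element-per-type names, assuming switching serializers is acceptable: XmlSerializer maps each concrete subclass to its own element name via XmlArrayItem (the type names below mirror the question).

        using System.Collections.Generic;
        using System.Xml.Serialization;

        public class BaseType { }
        public class ConcreteTypeA : BaseType { }
        public class ConcreteTypeB : BaseType { }

        public class Task
        {
            [XmlArrayItem(typeof(ConcreteTypeA))] // serialized as <ConcreteTypeA/>
            [XmlArrayItem(typeof(ConcreteTypeB))] // serialized as <ConcreteTypeB/>
            public List<BaseType> Children = new List<BaseType>();
        }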

    Read the article

  • SQL CE not loading from network share

    - by David Veeneman
    I installed VS 2010 RC yesterday, and suddenly SQL Server CE isn't loading files from a network share. In projects compiled with VS 2008, if I try to open a SQL CE file located on a network share, I get an error that reads like this:

        Internal error: Cannot open the shared memory region.

    If I try to create a data connection in VS 2010 to a SQL CE file on a network share, I get this error:

        SQL Server Compact does not support opening database files on a network share.

    Can anyone shed any light on what's going on? Thanks.

    Read the article

  • How to prevent a globally overridden "new" operator from being linked in from external library

    - by mprudhom
    In our iPhone Xcode 3.2.1 project, we're linking in two external static C++ libraries, libBlue.a and libGreen.a. libBlue.a globally overrides the "new" operator for its own memory management. However, when we build our project, libGreen.a winds up using libBlue's new operator, which results in a crash (presumably because libBlue.a is making assumptions about the kinds of structures being allocated). Both libBlue.a and libGreen.a are provided by third parties, so we can't change their source code or build options. When we remove libBlue.a from the project, libGreen.a doesn't have any issues. However, no amount of shuffling the linking order of the libraries seems to fix the problem, nor does any experimentation with the various linking flags. Is there some way to tell Xcode to tell the linker to "have libGreen's use of the new operator use the standard C++ new operator rather than the one redefined by libBlue"?
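
    For context, a minimal sketch of what libBlue is presumably doing: a global operator new replacement is program-wide by definition, which is why libGreen picks it up regardless of link order.

        #include <cstdlib>
        #include <new>

        // Replacing the global operator new/delete affects every allocation in
        // the final image, including allocations made inside other static libs.
        void *operator new(std::size_t size) throw(std::bad_alloc)
        {
            void *p = std::malloc(size); // a custom allocator would live here
            if (!p) throw std::bad_alloc();
            return p;
        }

        void operator delete(void *p) throw()
        {
            std::free(p);
        }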

    Read the article

  • removing duplicate strings from a massive array in java efficiently?

    - by Preator Darmatheon
    I'm considering the best possible way to remove duplicates from an (unsorted) array of strings; the array contains millions or tens of millions of strings. The array is already prepopulated, so the optimization goal is only removing dups, not preventing dups from initially populating it. I was thinking along the lines of doing a sort and then a binary search to get a log(n) search instead of an n (linear) search. This would give me n log n + n searches, which, although better than an unsorted n^2 search, still seems slow. (I was also considering hashing, but am not sure about the throughput.) Please help! I'm looking for an efficient solution that addresses both speed and memory, since there are millions of strings involved, without using the Collections API!
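
    A sketch of the sort-based approach, with one refinement: once the array is sorted, duplicates are adjacent, so a single linear pass removes them with no binary searches at all, for O(n log n) overall. Only java.util.Arrays is used, not the collections framework.

        import java.util.Arrays;

        public class Dedup {
            static String[] removeDuplicates(String[] a) {
                Arrays.sort(a);                       // duplicates become adjacent
                int w = 0;                            // write index for unique entries
                for (int r = 0; r < a.length; r++) {
                    if (w == 0 || !a[r].equals(a[w - 1])) {
                        a[w++] = a[r];
                    }
                }
                return Arrays.copyOf(a, w);           // trim to the unique prefix
            }
        }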

    Read the article

  • On Windows XP, programmatically set Pagefile to "No Paging File" on single c: drive

    - by NBPC77
    I'm trying to write a C#/.NET application that optimizes the hard drives for our XP workstations:

    1. Set pagefile to "No paging file"
    2. Reboot
    3. Run a defrag utility to optimize the data and apps
    4. Create a contiguous page file
    5. Reboot, run pagedefrag from Sysinternals

    I'm really struggling with #1. I delete the following key:

        SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PagingFiles

    Upon reboot, the System Control Panel shows "No page file", but c:\pagefile.sys still exists, and it's in use by the SYSTEM process, so I can't delete it and I can't optimize the HD. I tried using PendingFileRenamingOperations and that bombs out too. I tried using WMI (Win32_PageFileSetting), but that only lets you set sizes (not zero; it defaults to 2MB). Of course, if I do the manual steps outlined above, it works. I think I need an API call to make this happen.
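
    A hedged C# sketch of step 1, as an assumption rather than a verified fix: instead of deleting the value, set PagingFiles (a REG_MULTI_SZ) to an empty list, which is how "no paging file" is normally recorded.

        using Microsoft.Win32;

        class PagefileConfig
        {
            static void Main()
            {
                using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
                    @"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management",
                    true)) // open writable
                {
                    // An empty multi-string means no paging file is configured.
                    key.SetValue("PagingFiles", new string[0],
                                 RegistryValueKind.MultiString);
                }
            }
        }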

    Read the article

  • Sharepoint Foundation 2010 development environment installation problems

    - by Robert Koritnik
    I'm having problems setting up a development machine for SharePoint (Foundation) 2010. This is what I did so far on the machine:

    1. Installed a clean Windows 7 x64 with 4GB of RAM, not part of any domain: just a simple standalone machine.
    2. Enabled the IIS-related features as described here, except the two IIS6-related ones.
    3. Installed SQL Server 2008 R2 Development Edition (DB Engine and Writer enabled, but not SQL Agent).
    4. Installed Visual Studio 2010 Premium.
    5. Started installing SharePoint Foundation 2010: first extracting the files, changing the config to enable installation on Windows 7, then installing it as Server Farm (then Complete) to avoid installing SQL Express.
    6. Created a separate SPF_CONFIG local user with the "Log on as a service" right.
    7. Opened the SPF Management Shell and ran New-SPConfigurationDatabase so I am able to use a non-domain username (the SPF_CONFIG user created in the previous step).

    But all I get is this:

    The outcome after this error is:

    - Database Sharepoint2010Config is created
    - User SPF_CONFIG is added to SQL Server and attached to this newly created database as dbowner
    - Checking SQL Server security logins, this user has the following rights: dbcreator, securityadmin, public

    Read the article

  • How do laziness and I/O work together in Haskell?

    - by Bill
    I'm trying to get a deeper understanding of laziness in Haskell. I was imagining the following snippet today:

        import Control.Applicative ((<$>))

        data Image = Image { name :: String, pixels :: String }

        image :: String -> IO Image
        image path = Image path <$> readFile path

    The appeal here is that I could simply create an Image instance and pass it around; if I need the image data, it would be read lazily, and if not, the time and memory cost of reading the file would be avoided:

        main = do
          image <- image "file"
          print $ length $ pixels image

    But is that how it actually works? How is laziness compatible with IO? Will readFile be called regardless of whether I access pixels image, or will the runtime leave that thunk unevaluated if I never refer to it? If the image is indeed read lazily, then isn't it possible I/O actions could occur out of order? For example, what if immediately after calling image I delete the file? Now the print call will find nothing when it tries to read.
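
    For contrast, a small sketch (reusing the Image type above) of the usual way to opt out of the laziness the question worries about: force the contents before returning, so the file is fully read inside image itself.

        -- Forces the whole file before returning; 'length' demands the entire
        -- stream, so the read cannot be reordered past a later delete.
        imageStrict :: String -> IO Image
        imageStrict path = do
          contents <- readFile path
          length contents `seq` return (Image path contents)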

    Read the article

  • How to find all classes implementing IDisposable?

    - by apoorv020
    I am working on a large project, and one of my tasks is to remove possible memory leaks. In my code, I have noticed several IDisposable items not being disposed of, and have fixed that. However, that leads me to a more basic question: how do I find all classes used in my project that implement IDisposable? (Not custom-created classes, but normal library classes.) I have already found one less-than-obvious class that implements IDisposable (DataTable implements MarshalByValueComponent, which inherits from IDisposable). Right now I am manually checking any suspected classes using MSDN, but isn't there some way to automate this process?
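
    A sketch of one way to automate the check with reflection: walk every type in the currently loaded assemblies and keep the concrete classes that implement IDisposable.

        using System;
        using System.Linq;

        class DisposableScanner
        {
            static void Main()
            {
                // Note: only assemblies already loaded into the AppDomain are
                // scanned, and GetTypes can throw for unloadable types.
                var disposables =
                    from assembly in AppDomain.CurrentDomain.GetAssemblies()
                    from type in assembly.GetTypes()
                    where typeof(IDisposable).IsAssignableFrom(type)
                          && type.IsClass && !type.IsAbstract
                    select type.FullName;

                foreach (var name in disposables.Distinct().OrderBy(n => n))
                    Console.WriteLine(name);
            }
        }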

    Read the article

  • What are the pros and cons of using an in-memory DB rather than a ThreadLocal?

    - by Pangea
    We have been using ThreadLocal so far to carry some data around without cluttering the API. However, there are some issues with this use of thread locals that I don't like:

    1. Over the years, the number of data items carried in the thread local has increased.
    2. Since we started using thread pools (for some lightweight processing), we have also been migrating these data to the pool's threads and copying them back again.

    I am thinking of using an in-memory DB for these (we don't want to add this to the API), and I'm wondering if this approach is good. What are the pros and cons? Thanks in advance.
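
    For reference, a sketch of the copy-to-worker dance described in point 2 above (names invented for illustration): the context is captured on the submitting thread, then restored and cleared on the pool thread.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ContextDemo {
            static final ThreadLocal<String> CONTEXT = new ThreadLocal<String>();

            static Runnable withContext(final Runnable task) {
                final String captured = CONTEXT.get();   // copied on the caller's thread
                return new Runnable() {
                    public void run() {
                        CONTEXT.set(captured);           // restored on the pool thread
                        try {
                            task.run();
                        } finally {
                            CONTEXT.remove();            // don't leak into reused threads
                        }
                    }
                };
            }

            public static void main(String[] args) {
                CONTEXT.set("request-42");
                ExecutorService pool = Executors.newFixedThreadPool(2);
                pool.submit(withContext(new Runnable() {
                    public void run() {
                        System.out.println(CONTEXT.get()); // prints request-42
                    }
                }));
                pool.shutdown();
            }
        }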

    Read the article

  • Copy Structure To Another Program

    - by Steven
    Long story, long: I am adding a web interface (ASP.NET, VB) to a data acquisition system developed with LabVIEW which outputs raw data files. These raw data files are the binary representation of a LabVIEW cluster (essentially a structure). LabVIEW provides functions to instantiate a class or structure or call a method defined in a .NET DLL file. I plan to create a DLL file containing a structure definition and a class with methods to transfer the structure. When the webpage requests data, it would call a LabVIEW executable with a filename parameter. The LabVIEW code would instantiate the structure, populate it from the data file, then call the method to transfer the data back to the website. Long story, short: how do you recommend I transfer (copy) an instance of a structure from one .NET program to a VB.NET program? Ideas considered: sockets, temp file, XML file, config file, web services, CSV, some type of serialization, shared memory.
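
    A hedged sketch of the XML-file idea from that list, since the structure definition already lives in a shared DLL; the type and field names below are invented stand-ins for the real cluster.

        using System.IO;
        using System.Xml.Serialization;

        // Stand-in for the structure generated from the LabVIEW cluster.
        public class AcquisitionRecord
        {
            public string ChannelName;
            public double[] Samples;
        }

        public static class RecordTransfer
        {
            static readonly XmlSerializer serializer =
                new XmlSerializer(typeof(AcquisitionRecord));

            // Called by the LabVIEW side: write the populated structure out.
            public static void Save(AcquisitionRecord record, string path)
            {
                using (FileStream stream = File.Create(path))
                    serializer.Serialize(stream, record);
            }

            // Called by the VB.NET website: read the structure back in.
            public static AcquisitionRecord Load(string path)
            {
                using (FileStream stream = File.OpenRead(path))
                    return (AcquisitionRecord)serializer.Deserialize(stream);
            }
        }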

    Read the article

  • A cross between std::multimap and std::vector?

    - by Milan Babuškov
    I'm looking for an STL container that works like std::multimap but has constant access time to a random n-th element. I need this because I have a structure in memory that is a std::multimap for many reasons, but the items stored in it have to be presented to the user in a listbox. Since the amount of data is huge, I'm using a list box with virtual items (i.e. the list control polls for the value at line X). As a workaround, I'm currently using an additional std::vector to store "indexes" into the std::multimap, which I fill like this:

        std::vector<MMap::data_type *> vec;
        for (MMap::iterator it = mmap.begin(); it != mmap.end(); ++it)
            vec.push_back(&it->second);

    But this is not a very elegant solution. Is there such a container?
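
    One candidate worth sketching, assuming Boost is acceptable rather than pure STL: Boost.MultiIndex can maintain an ordered (multimap-like) view and a random-access view over the same elements, so the listbox gets constant-time access to the n-th item.

        #include <string>
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/random_access_index.hpp>

        using namespace boost::multi_index;

        struct Row { int key; std::string value; };

        typedef multi_index_container<
            Row,
            indexed_by<
                ordered_non_unique< member<Row, int, &Row::key> >, // multimap-like view
                random_access<>                                    // n-th element view
            >
        > RowSet;

        // Usage sketch: rows.insert(r) keeps both views consistent;
        // rows.get<1>()[n] fetches the n-th element in constant time.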

    Read the article

  • Processing potentially large STDIN data, more than once

    - by d11wtq
    I'd like to provide an accessor on a class that returns an NSInputStream for STDIN, which may hold several hundred megabytes (or gigabytes, though that's unlikely) of data. When a caller gets this NSInputStream, it should be able to read from it without worrying about exhausting the data it contains. In other words, another block of code may request the NSInputStream and will expect to be able to read from it too. Without first copying all of the data into an NSData object, which (I assume) would cause memory exhaustion, what are my options for handling this? The returned NSInputStream does not have to be the same instance; it simply needs to provide the same data. The best I can come up with right now is to copy STDIN to a temporary file and then return NSInputStream instances backed by that file. Is this pretty much the only way to handle it? Is there anything I should be cautious of if I go the temporary-file route?

    Read the article

  • Inserting Large volume of data in SQL Server 2005

    - by Manjoor
    We have an application (written in C#) that stores live stock market prices in a database (SQL Server 2005). It inserts about 1 million records in a single day. Now we are adding more market segments, and the number of records will double (2 million/day). Currently the average insertion rate is about 50 records per second; the maximum is 450 and the minimum is 0. To check certain conditions, I have used Service Broker (an asynchronous trigger) on my price table. It is running fine at this time (about 35% CPU utilization). Now I am planning to create an in-memory dataset of current stock prices, on which we would like to do some simple calculations. I would like to hear different views on this; please share your way of dealing with such a situation.
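
    One standard lever at these volumes, offered as a sketch rather than a prescription: batch the incoming ticks and flush them with SqlBulkCopy instead of row-by-row INSERTs (the table and column names below are invented).

        using System.Data;
        using System.Data.SqlClient;

        static class PriceWriter
        {
            // Flushes a batch of buffered ticks in one bulk operation.
            public static void Flush(DataTable batch, string connectionString)
            {
                using (SqlBulkCopy bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "dbo.StockPrice"; // invented name
                    bulk.BatchSize = 5000;                        // tune empirically
                    bulk.WriteToServer(batch);
                }
            }
        }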

    Read the article

  • LNK1106 with big binary resource

    - by E Dominique
    I have a rather huge .dat-file (896MB) included as a BIN resource in my project. Now I get a LNK1106 link error ("fatal error LNK1106: invalid file or disk full: cannot seek to 0x382A3920".) I use Visual Studio 2005 under Windows XP, and have tried on a 4GB RAM machine with high Virtual Memory settings and lots of disk space. I have tried a number of different optimization flags, but to no avail. Does anyone have a clue? EDIT: I have narrowed it down to a specific size of the compiled resource. If the .res file is 544078588 bytes (about 518.9MB) or larger, the error occurs. If it is smaller it works just fine. Still no solution, though...

    Read the article

  • The speed of .NET in numerical computing

    - by Yin Zhu
    In my experience, .NET is 2 to 3 times slower than native code. (I implemented L-BFGS for multivariate optimization.) I have traced the ads on Stack Overflow to http://www.centerspace.net/products/ and the speed is really amazing: close to native code. How can they do that? They say:

    Q. Is NMath "pure" .NET?

    A. The answer depends somewhat on your definition of "pure .NET". NMath is written in C#, plus a small Managed C++ layer. For better performance of basic linear algebra operations, however, NMath does rely on the native Intel Math Kernel Library (included with NMath). But there are no COM components, no DLLs--just .NET assemblies. Also, all memory allocated in the Managed C++ layer and used by native code is allocated from the managed heap.

    Can someone explain more to me? Thanks!
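
    The general mechanism the answer hints at can be sketched in a few lines: the managed layer hands arrays straight to a native kernel through P/Invoke, so the heavy loops run as native code. The library and function names below are invented; NMath's actual bindings are not public.

        using System.Runtime.InteropServices;

        static class NativeBlas
        {
            // Hypothetical native routine: double[] arguments are pinned and
            // marshaled as raw pointers, so no copying takes place on the call.
            [DllImport("nativeblas.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern double dot(double[] x, double[] y, int n);
        }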

    Read the article

  • How do I switch to a SQL Server database that will exist after another command?

    - by Jason Young
    I can't get this script to run, because SQL Server Management Studio 2008 says the database "NewName" does not exist. However, the script's purpose is to rename an existing database, so it does exist by the time execution reaches that line. Ideas?

        USE Master;
        ALTER DATABASE OldName SET SINGLE_USER WITH NO_WAIT;
        ALTER DATABASE OldName MODIFY NAME = NewName;
        ALTER DATABASE NewName SET MULTI_USER;
        USE NewName; -- THIS LINE FAILS BEFORE THE SCRIPT EVEN RUNS!
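
    One hedged workaround: the whole batch is compiled before anything runs, which is why the last line fails up front. Either split the script into separate batches with GO, or defer name resolution with dynamic SQL, e.g.:

        -- Deferred compilation: the inner batch is only parsed when EXEC runs,
        -- after the rename has taken effect. Note the USE only applies inside
        -- the EXEC's own scope.
        EXEC('USE NewName; SELECT DB_NAME();');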

    Read the article

  • weird data-grid-view/crystal-reports behaviour c# winforms

    - by jello
    I have a winforms project which I keep in different versions, each version having its own project folder. All these projects use the same database file, which is also copied into each project folder. So if I run one project, let's say 0.34, and then try to run 0.35, none of the database functions work unless I detach the database in SQL Server Management Studio Express. That is, none of the database functions work except the data grid view and/or Crystal Reports. But then, if I detach the database and run any version, all the database functions work except the data grid view and/or Crystal Reports. So to recap: when the database functions work (like SELECT), Crystal Reports doesn't work; but when the database functions don't work because the database is not detached, Crystal Reports works. Weird. Any ideas?

    Read the article
