Search Results

Search found 91480 results on 3660 pages for 'large data in sharepoint list'.


  • Declaring two large 2D arrays gives a segmentation fault

    - by pfdevil
    Hello, I'm trying to declare and allocate memory for two 2D arrays. However, when I try to assign a value to itemFeatureQ[39][16816] I get a segmentation fault. I can't understand it, since I have 2GB of RAM and am only using 19MB on the heap. Here is the code:

        double** reserveMemory(int rows, int columns) {
            double **array;
            int i;
            array = (double**) malloc(rows * sizeof(double *));
            if(array == NULL) {
                fprintf(stderr, "out of memory\n");
                return NULL;
            }
            for(i = 0; i < rows; i++) {
                array[i] = (double*) malloc(columns * sizeof(double *));
                if(array == NULL) {
                    fprintf(stderr, "out of memory\n");
                    return NULL;
                }
            }
            return array;
        }

        void populateUserFeatureP(double **userFeatureP) {
            int x, y;
            for(x = 0; x < CUSTOMERS; x++) {
                for(y = 0; y < FEATURES; y++) {
                    userFeatureP[x][y] = 0;
                }
            }
        }

        void populateItemFeatureQ(double **itemFeatureQ) {
            int x, y;
            for(x = 0; x < FEATURES; x++) {
                for(y = 0; y < MOVIES; y++) {
                    printf("(%d,%d)\n", x, y);
                    itemFeatureQ[x][y] = 0;
                }
            }
        }

        int main(int argc, char *argv[]) {
            double **userFeatureP = reserveMemory(480189, 40);
            double **itemFeatureQ = reserveMemory(40, 17770);
            populateItemFeatureQ(itemFeatureQ);
            populateUserFeatureP(userFeatureP);
            return 0;
        }

    Read the article

  • SubSonic 3 issue creating List<>

    - by Brian Cochran
    I have an application that requires we use distinct connection strings per user. We are trying to upgrade from SubSonic 2.x to 3.0, and I'm running into issues with trying to create a List<T> of objects. When I try to create a List like this:

        List<table_name> oList = table_name.All().Where(tn => tn.table_id == TableId).ToList();

    I get the error "Connection string 'ConnectionStringName' does not exist." So I try to create the List<T> like this:

        List<table_name> oList = table_name.All(sConnectionString, "System.Data.SqlClient").Where(tn => tn.table_id == TableId).ToList();

    and I get the error "The name 'table_name' does not exist in the current context." I'm using SQL Server, the sConnectionString is definitely verified to be a good connection string, and table_name is a table in the database. What am I doing wrong?

    Read the article

  • Linking to a large-address-aware DLL

    - by Canopus
    Suppose I have a DLL which is built with the LARGEADDRESSAWARE linker flag set, and I have an application that dynamically links to this DLL. Does this make my application LARGEADDRESSAWARE? If not, does it make sense to have this flag set for any DLL?

    Read the article

  • Selecting a sequence of elements from the IList

    - by KhanS
    I have an IList<PersonDetails>, where the PersonDetails object consists of the person's name, address and phone number. The list consists of more than 1000 person details, and I would like to display 50 PersonDetails per page. Is there a way to select only 50 elements from the list and return them? For example:

        myList.select(1, 50)
        myList.select(51, 100)

    I am able to select only the first 50 by using myList.Take(50). The entire list is at the WCF service, and I would like to get only fifty elements at a time.
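    A minimal paging sketch in C# using LINQ's Skip and Take (the page and pageSize names are illustrative; PersonDetails is the type from the question):

        using System.Collections.Generic;
        using System.Linq;

        public static class Paging
        {
            // Returns one page of results; page is 1-based in this sketch.
            public static List<PersonDetails> GetPage(IList<PersonDetails> source, int page, int pageSize)
            {
                return source.Skip((page - 1) * pageSize)   // skip the earlier pages
                             .Take(pageSize)                // then take at most one page worth
                             .ToList();
            }
        }

    On the WCF side the same Skip/Take can be applied to the query before the data is serialized, so only fifty items cross the wire per call.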

    Read the article

  • Efficient retrieval of lists over WebServices

    - by Chris
    I have a WCF web service that uses LINQ and EF to connect to a SQL database, and an ASP.NET MVC front end that collects its data from the web service. It currently has functions such as:

        List<Customer> GetCustomers();

    As the number of customers increases massively, the amount of data being passed also increases, reducing efficiency. What is the best way to "page" data across web services? My current idea is to implement a crude paging system such as:

        List<Customer> GetCustomers(int start, int length);

    This, however, means I would have to replicate such code for all functions returning List types. It is unfortunate that I cannot use LINQ, as it would be much nicer. Does anyone have any advice or ideas of patterns to implement that would be "as nice as possible"? Thanks
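    One way to avoid repeating the paging signature on every list-returning operation is a single generic page contract. The sketch below is only an illustration: the Customer entity, its CustomerId key, and the EF context name ShopEntities are all assumptions, not from the original service.

        using System.Collections.Generic;
        using System.Linq;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class PagedResult<T>
        {
            [DataMember] public List<T> Items { get; set; }
            [DataMember] public int TotalCount { get; set; }
        }

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            PagedResult<Customer> GetCustomers(int pageIndex, int pageSize);
        }

        public class CustomerService : ICustomerService
        {
            public PagedResult<Customer> GetCustomers(int pageIndex, int pageSize)
            {
                using (var context = new ShopEntities())   // hypothetical EF context name
                {
                    // A stable ordering is required before Skip; CustomerId is an assumed key property.
                    var query = context.Customers.OrderBy(c => c.CustomerId);
                    return new PagedResult<Customer>
                    {
                        TotalCount = query.Count(),
                        Items = query.Skip(pageIndex * pageSize).Take(pageSize).ToList()
                    };
                }
            }
        }

    WCF serializes PagedResult<Customer> as a closed generic type, so the generated client proxy still sees a concrete contract.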

    Read the article

  • Displaying a list of items vertically in a table instead of horizontally

    - by MichaelMM
    I have a list of items sorted alphabetically:

        list = [a,b,c,d,e,f,g,h,i,j]

    I'm able to output the list in an HTML table horizontally, like so:

        | a , b , c , d |
        | e , f , g , h |
        | i , j ,   ,   |

    What's the algorithm to create the table vertically, like this:

        | a , d , g , j |
        | b , e , h ,   |
        | c , f , i ,   |

    I'm using Python, but your answer can be in any language or even pseudocode. Thanks
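    Since any language is welcome, here is a sketch in C#: compute the row count first (ceiling of count/columns), then place item i at column i / rows, row i % rows.

        using System;
        using System.Collections.Generic;

        class VerticalLayout
        {
            // Lays items out top-to-bottom, then left-to-right, into the given number of columns.
            static string[,] FillVertically(IList<string> items, int columns)
            {
                int rows = (items.Count + columns - 1) / columns;   // ceiling division
                var grid = new string[rows, columns];
                for (int i = 0; i < items.Count; i++)
                {
                    grid[i % rows, i / rows] = items[i];
                }
                return grid;
            }

            static void Main()
            {
                var list = new List<string> { "a", "b", "c", "d", "e", "f", "g", "h", "i", "j" };
                var grid = FillVertically(list, 4);
                for (int r = 0; r < grid.GetLength(0); r++)
                {
                    var cells = new List<string>();
                    for (int c = 0; c < grid.GetLength(1); c++)
                        cells.Add(grid[r, c] ?? " ");
                    Console.WriteLine("| " + string.Join(" , ", cells.ToArray()) + " |");
                }
            }
        }

    For the ten-item example with four columns this prints the vertical arrangement shown in the question.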

    Read the article

  • Sending large XML from Silverlight to SVC (WCF)

    - by alexbf
    Hi! I want to send a big XML string from Silverlight to a WCF .svc service. It looks like anything under about 50k is sent correctly, but if I try to send something over that limit, my request reaches the server (BeginRequest is called) but never reaches my SVC, and I get the classic "NotFound" exception. Any idea how to raise that limit? If I can't raise it, what are my other options? Thanks, Alex
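    A request that reaches the server but never hits the service method, with a threshold around 50 KB, matches the default 64 KB WCF message-size limit, though that cannot be confirmed from the question alone. A sketch of raising the limits programmatically on a BasicHttpBinding follows; the 10 MB figure is just an illustrative choice, and the same limits can also be raised via maxReceivedMessageSize and readerQuotas in the service's web.config.

        using System.ServiceModel;

        public static class WcfBindingHelper
        {
            public static BasicHttpBinding CreateLargeMessageBinding()
            {
                var binding = new BasicHttpBinding
                {
                    MaxReceivedMessageSize = 10 * 1024 * 1024,  // default is 65536 bytes
                    MaxBufferSize = 10 * 1024 * 1024
                };
                binding.ReaderQuotas.MaxStringContentLength = 10 * 1024 * 1024;  // default is 8192 characters
                binding.ReaderQuotas.MaxArrayLength = 10 * 1024 * 1024;
                return binding;
            }
        }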

    Read the article

  • How can I execute large MySQL queries fast?

    - by testkhan
    I have 4 MySQL tables and a single query with JOINs across them, which I am requesting via jQuery AJAX. It takes far too long, about 1-3 minutes, while I want it to execute in about 2-5 seconds on average. Is there any special way to execute the queries fast?

    Read the article

  • SQL select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the ids from a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is constructing a huge IN clause, such as:

        SELECT ...        -- complicated stuff
        WHERE ...         -- more stuff
          AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393)

    Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.

    Read the article

  • Which kind of changes can't I do with lightweight migration in Core Data?

    - by dontWatchMyProfile
    I recently tried a lot of different things with lightweight migration. These all work:

        1) Rename attributes (with a renaming identifier specified)
        2) Add attributes
        3) Add a new entity + new attribute + inverse relationship to an already existing entity
        4) Remove an existing entity + relationships to that entity

    It almost looks like just about anything can be handled with LM. Did I miss something? In which cases will I get into trouble and need a more complex approach?

    Read the article

  • walking list in KDB/kernel debugger

    - by user291849
    I need to walk a linked list in the kernel debugger. How can I determine the head pointer and walk the list? I have a listing and can find the address and location in the code where I check whether I have a head, so I know the specific code location and address. But I am not sure how to determine the head pointer, or how to find the next element and its pointer on the list.

    Read the article

  • Empty R environment becomes large file when saved

    - by user1052019
    I'm getting behaviour I don't understand when saving environments. The code below demonstrates the problem. I would have expected the two files (far-too-big.RData and right-size.RData) to be the same size, and also very small, because the environments they contain are empty. In fact, far-too-big.RData ends up the same size as bigfile.RData. I get the same results using 2.14.1 and 2.15.2, both on WinXP 5.1 SP3. Can anyone explain why this is happening? Thanks.

        a <- matrix(runif(1000000, 0, 1), ncol=1000)
        save(a, file="c:/temp/bigfile.RData")

        test <- function() {
            load("c:/temp/bigfile.RData")
            test <- new.env()
            save(test, file="c:/temp/far-too-big.RData")
            test1 <- new.env(parent=globalenv())
            save(test1, file="c:/temp/right-size.RData")
        }
        test()

    Read the article

  • Read a large result set in chunks from mysql

    - by ripper234
    I am trying to read a huge result set from MySQL. Reading it in a straightforward manner didn't work, as MySQL tries to return all results together, which times out. I found the following piece of code, which tells MySQL to stream the results back one row at a time:

        stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                    java.sql.ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE);

    Can I read a chunk at a time instead of one row at a time? I've tried setting the fetch size to a different value, but it doesn't work.

    Read the article

  • Prevent RegEx Hang on Large Matches...

    - by developerjay
    This is a great regular expression for dates... however, it hangs indefinitely on one page I tried. I wanted to try this page (http://pleac.sourceforge.net/pleac%5Fpython/datesandtimes.html) because it has lots of dates on it and I want to grab all of them. I don't understand why it hangs here when it doesn't on other pages. Why is my regexp hanging, and/or how could I clean it up to make it better/more efficient? Python code:

        monthnames = "(?:Jan\w*|Feb\w*|Mar\w*|Apr\w*|May|Jun\w?|Jul\w?|Aug\w*|Sep\w*|Oct\w*|Nov(?:ember)?|Dec\w*)"
        pattern1 = re.compile(r"(\d{1,4}[\/\\\-]+\d{1,2}[\/\\\-]+\d{2,4})")
        pattern4 = re.compile(r"(?:[\d]*[\,\.\ \-]+)*%s(?:[\,\.\ \-]+[\d]+[stndrh]*)+[:\d]*[\ ]?(PM)?(AM)?([\ \-\+\d]{4,7}|[UTCESTGMT\ ]{2,4})*" % monthnames, re.I)
        patterns = [pattern4, pattern1]
        for pattern in patterns:
            print re.findall(pattern, s)

    By the way, when I say I'm trying it against this site, I mean I'm trying it against the webpage source.

    Read the article

  • A question about comparing List<T>

    - by Varyanica
    I have two lists:

        List<comparerobj> list_c = new List<comparerobj>();
        List<comparerobj> list_b = new List<comparerobj>();

    I'm filling the lists somehow, then I'm trying to find the elements in list_b which list_c doesn't contain:

        foreach (comparerobj b in list_b)
        {
            bool lc = !list_c.Contains(b);
            if (lc != true)
            {
                data.Add(b);
            }
        }

    But for every b I'm getting lc = true. What am I doing wrong?
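    A likely explanation (it cannot be confirmed without seeing how the lists are filled) is that List<T>.Contains uses EqualityComparer<T>.Default, which for a class compares references unless Equals and GetHashCode are overridden, so two separately constructed comparerobj instances never match. A sketch, assuming a hypothetical identifying field Id:

        using System.Collections.Generic;
        using System.Linq;

        public class comparerobj
        {
            public int Id { get; set; }   // hypothetical key field, not from the question

            public override bool Equals(object obj)
            {
                var other = obj as comparerobj;
                return other != null && other.Id == Id;
            }

            public override int GetHashCode()
            {
                return Id.GetHashCode();
            }
        }

    With value equality in place, the elements of list_b that list_c does not contain can also be computed directly as list_b.Except(list_c).ToList().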

    Read the article

  • Easiest way to merge two List<T>s

    - by Chris McCall
    I've got two List<Name>s:

        public class Name
        {
            public string NameText { get; set; }
            public Gender Gender { get; set; }
        }

        public class Gender
        {
            public decimal MaleFrequency { get; set; }
            public decimal MaleCumulativeFrequency { get; set; }
            public decimal FemaleCumulativeFrequency { get; set; }
            public decimal FemaleFrequency { get; set; }
        }

    If the NameText property matches, I'd like to take the FemaleFrequency and FemaleCumulativeFrequency from the list of female Names and the MaleFrequency and MaleCumulativeFrequency values from the list of male Names, and create one list of Names with all four properties populated. What's the easiest way to go about this in C# using .NET 3.5?
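    A sketch using a LINQ join, which is available in .NET 3.5 (maleNames and femaleNames are assumed to be the two lists, with NameText values pairing up one-to-one):

        using System.Collections.Generic;
        using System.Linq;

        public static class NameMerger
        {
            public static List<Name> Merge(List<Name> maleNames, List<Name> femaleNames)
            {
                return maleNames.Join(
                    femaleNames,
                    m => m.NameText,                 // key from the male list
                    f => f.NameText,                 // key from the female list
                    (m, f) => new Name
                    {
                        NameText = m.NameText,
                        Gender = new Gender
                        {
                            MaleFrequency = m.Gender.MaleFrequency,
                            MaleCumulativeFrequency = m.Gender.MaleCumulativeFrequency,
                            FemaleFrequency = f.Gender.FemaleFrequency,
                            FemaleCumulativeFrequency = f.Gender.FemaleCumulativeFrequency
                        }
                    }).ToList();
            }
        }

    Names that appear in only one of the two lists are dropped by this inner join; if they should be kept, a left join (GroupJoin plus DefaultIfEmpty) would be needed instead.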

    Read the article

  • Increase the length of Xcode's "recent project" list?

    - by Bogatyr
    I switch in Xcode between working on a lot of different projects frequently (some I'm actively working on, some are old projects where I'm looking up code I want to re-use or quote in SO answers :)), so part of my "working set" of projects invariably ends up falling off the recent projects list. I do use Finder tabs for the full working set of current project folders, but I really like the fast switching available through the recent projects list. Is there a way to increase the length of this list so that I can see more recently opened projects?

    Read the article

  • VS2008 is very slow on a specific large C++ solution

    - by VioletRose
    I have a solution with 21 C++ projects and 1 VB.NET project. The IDE responds very slowly when I simply move the caret in a file or try to open a menu; the process seems to take 50% of the CPU for each movement. It only happens with this solution, and only on my machine. The solution has a total of 2380 source and header files, of which 1280 are header files. I tried to remove all connection to source control (Perforce), but it didn't help. I also have Visual Assist installed, but even after removing it (uninstall), the same behaviour continued. Any idea?

    Read the article

  • How to store and synchronize a big list of strings

    - by Joel
    I have a large database table in SQL Express on Windows, with a particular field of interest, 'code'. I have an Apache web server with MySQL on Linux. The web application on the Linux box needs access to the list of all codes; the only thing it will use the list for is checking whether a given code exists. Having the Linux server call out to the Windows server is impractical, as the Windows server is behind a NATed office internet connection and may not always be accessible. I have set it up so the Windows server pushes the list of codes to the web server by means of a simple HTTP POST request. However, at this point I have not implemented the storage of the codes on the Linux box.

    Should I store them in a MySQL table with a single field 'code'? Then I get fast indexed lookups, O(1), but I think synchronization will be an issue: given an updated list of codes, pushed from the Windows box, how would I optimally synchronize the list with the database? TRUNCATE, followed by INSERT?

    Should I instead store them in a flat file? Then I have O(n) lookup time rather than O(1), plus an extra constant-time overhead, as I will be processing the file in Ruby. However, synchronization is easy: simply replace the file.

    Read the article

  • Group Objects by Common Key

    - by Marshmellow1328
    I have a list of Customers. Each customer has an address, and some customers may actually have the same address. My ultimate goal is to group customers based on their address. I figure I could either put the customers in some sort of list-based structure and sort on the addresses, or I could drop the objects into some sort of map that allows multiple values per key. I will now make a pretty picture:

        List: A1 - C1, A1 - C2, A2 - C3, A3 - C4, A3 - C5

        Map:  A1   A2   A3
              C1   C3   C4
              C2        C5

    Which option (or any others) do you see as the best solution? Are there any existing classes that will make development easier?
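    The question does not state a language, so to match the other examples on this page here is a C# sketch; .NET's ILookup<TKey, TElement> is an existing class that models exactly "one key, many values" (the Customer type and its Address property are assumed from the description):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class CustomerGrouping
        {
            public static void PrintGroups(IEnumerable<Customer> customers)
            {
                // ToLookup builds the address -> customers multimap in one pass.
                ILookup<string, Customer> byAddress = customers.ToLookup(c => c.Address);

                foreach (IGrouping<string, Customer> group in byAddress)
                {
                    Console.WriteLine("{0}: {1} customer(s)", group.Key, group.Count());
                }
            }
        }

    In Java the equivalent role is played by a Map<String, List<Customer>> or a third-party multimap class.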

    Read the article

  • Servlet ArrayList and HashMap problem with result

    - by nonameplum
    Hi, I have this code:

        List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
        Map<String, Object> item = new HashMap<String, Object>();
        data.clear();
        item.clear();
        int i = 0;
        while (i < 5) {
            item.put("id", i);
            i++;
            out.println("id: " + item.get("id"));
            out.println("--------------------------");
            data.add(item);
        }
        for (i = 0; i < 5; i++) {
            out.println("print data[" + i + "]" + data.get(i));
        }

    The result of that is:

        id: 0
        --------------------------
        id: 1
        --------------------------
        id: 2
        --------------------------
        id: 3
        --------------------------
        id: 4
        --------------------------
        print data[0]{id=4}
        print data[1]{id=4}
        print data[2]{id=4}
        print data[3]{id=4}
        print data[4]{id=4}

    Why is only the last element stored?

    Read the article
