Search Results

Search found 91480 results on 3660 pages for 'large data in sharepoint list'.

  • Making large toolbars like the iPod app

    - by andybee
    I am trying to create a toolbar programmatically (rather than via IB), very similar to the toolbar in the iPhone's built-in iPod app. Currently I've been experimenting with the UIToolbar class, but I'm not sure how (and if?) you can make the toolbar buttons centrally aligned and large like those in the iPod app. Additionally, regardless of size, the gradient/reflection artwork never respects the new size and is rendered as if the toolbar were still the smaller default size. If this cannot be done with a standard UIToolbar, I guess I need to create my own view. In that case, can the reflection/gradient be created programmatically, or will it require some clever alpha-transparency Photoshopped artwork?

  • Issues while downloading a document from SharePoint using Java

    - by Deepak Singh Rawat
    I am trying to download a file from a SharePoint 2007 SP2 document library using the GetItem method of the Copy web service, and I am facing the following issues:

    1. On my local instance (Windows Vista) only 10.5 KB of any file gets saved; the web service returns just 10.5 KB of data regardless of the file.
    2. On the production server I can list the documents with a given set of credentials, but when I try to download a document using the same credentials I get a 401 Unauthorized response. Downloading the same document through the SharePoint website works fine.
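
    For reference, a rough sketch of the call in Java, assuming client stubs generated by JAX-WS wsimport from the Copy.asmx WSDL; the Copy, CopySoap, getItem and FieldInformationCollection names below are illustrative and will vary with the WSDL and tooling. Two things worth checking against it: write the complete returned byte array in one call (writing only the first read buffer is a classic cause of files truncated at a fixed size), and register NTLM-capable credentials JVM-wide, since a 401 from the web service while the browser works often means the SOAP call is not authenticating the way the browser does.

        import java.net.Authenticator;
        import java.net.PasswordAuthentication;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import javax.xml.ws.Holder;

        public class SharePointDownload {
            public static void main(String[] args) throws Exception {
                // Hypothetical domain credentials; SharePoint typically wants NTLM.
                Authenticator.setDefault(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("DOMAIN\\user", "password".toCharArray());
                    }
                });

                // wsimport-generated stub for http://server/_vti_bin/Copy.asmx?WSDL (illustrative names).
                CopySoap port = new Copy().getCopySoap();

                Holder<Long> result = new Holder<>();
                Holder<FieldInformationCollection> fields = new Holder<>();
                Holder<byte[]> stream = new Holder<>();
                port.getItem("http://server/site/Shared Documents/report.docx", result, fields, stream);

                // Write the whole returned array at once, not just the first buffer.
                Files.write(Paths.get("report.docx"), stream.value);
            }
        }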

  • Request header is too large

    - by stck777
    I found several IllegalStateException exceptions in the logs:

        [#|2009-01-28T14:10:16.050+0100|SEVERE|sun-appserver2.1|javax.enterprise.system.container.web|_ThreadID=26;_ThreadName=httpSSLWorkerThread-80-53;_RequestID=871b8812-7bc5-4ed7-85f1-ea48f760b51e;|WEB0777: Unblocking keep-alive exception
        java.lang.IllegalStateException: PWC4662: Request header is too large
            at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:740)
            at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:657)
            at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:543)
            at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.parseRequest(DefaultProcessorTask.java:712)
            at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:577)
            at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:831)
            at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341)
            at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263)
            at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214)
            at com.sun.enterprise.web.portunif.PortUnificationPipeline$PUTask.doTask(PortUnificationPipeline.java:380)
            at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265)
            at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106)
        |#]

    Does anybody know which configuration changes would fix this?

  • Help with SharePoint (WSS 3.0) and jQuery

    - by Nicholas Selby
    OK, the scenario is as follows: I have a SharePoint site set up on Microsoft Online; the site is basically a job-booking system based on a custom SharePoint list. What I am trying to achieve is to extract the list items where the Invoiced column is set to "No". Eventually I would like to post this data to xero.com using their API, which accepts XML at its endpoints. I have tried using jQuery and jPoint, but my limited programming skills are holding me back. Could anyone offer me some advice, or point me in the right direction of someone who could help? I am willing to pay someone if they can help me get this working. :)

  • Design a large-scale network for an organization

    - by Essam
    Hello, I am new to networking and I want to design a large-scale network for an organization with a HQ and two branches, using a class A address range. My questions are:

    1. If I am using the network address 30.0.0.0 for the whole organization, how can it be different from another organization or company in another country that is using the same address?
    2. I have three locations for this organization, so I need five subnets: one for the HQ, two for branch A and branch B, one for connecting A to the HQ, and one for connecting branch B to the HQ (I will use a central DHCP server at the HQ). Is that number of subnets right? A worked example is sketched below.
    3. Is it advisable to use class A or class B for this organization in terms of addresses that would be wasted (let's say it is a university with two branches in two different states)?

    That is all; your help is highly appreciated.
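
    For illustration only, here is one hypothetical way to carve the requested five subnets out of 30.0.0.0/8; the exact ranges are arbitrary. Note that 30.0.0.0/8 is publicly registered address space, which is why real deployments avoid the "same address in another country" clash by using private RFC 1918 ranges such as 10.0.0.0/8 behind NAT instead:

        HQ LAN            30.1.0.0/16
        Branch A LAN      30.2.0.0/16
        Branch B LAN      30.3.0.0/16
        HQ-to-A WAN link  30.254.0.0/30   (point-to-point, 2 usable addresses)
        HQ-to-B WAN link  30.254.0.4/30   (point-to-point, 2 usable addresses)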

  • How to auto-increment a reference number persistently when NSManagedObjects are created in Core Data

    - by KayKay
    In my application I am using Core Data to store information, and I save that data to a server backed by MySQL over a web connection. Basically, what I want to do is keep track of the number of NSManagedObjects already created, and whenever a new NSManagedObject is created, assign it an integer value based on that count to act as the primary key in MySQL. For example, if there are already 10 NSManagedObjects, the new one will be assigned "11" as its primary key. These values only ever have to increase, because NSManagedObjects are never deleted. My approach would be a static member in the application delegate whose initial value can be any integer but which is incremented by one (like auto-increment) every time a new NSManagedObject is created; it also has to be persistent. I am not clear on how to do this; please give me suggestions. Thanks in advance.

  • How can a large number of developers write software together without either a cumbersome process or

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this, every major issue has resulted in a new check, either automated or manual. As a result, the process of delivering software is becoming ever more burdensome, which requires more developers, which... well, you can see it is a vicious circle. We now have a problem with releasing software quickly: the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?

  • OutOfMemoryException, stack size is huge, large number of threads

    - by Captain Comic
    Hello, I was profiling my .NET Windows service, trying to track down an OutOfMemoryException, and discovered that my stack usage is huge and growing because the number of threads keeps growing. Each thread gets 1024 KB on a Windows x64 machine, so when my app has 754 threads the stacks alone account for 772 MB. The problem is that I don't know where these threads come from: initially the app has a very limited number of threads, and they keep accumulating over time. I have two suspicions; either the threads are created by WCF or by database connections. My application uses both WCF and DataSets. When I profile the app (ANTS, dotTrace) I can see a large number of System.ServiceModel.Channels.ClientReliableDuplexSessionChannel instances, and that number increases with time; I can see thousands of these objects created. So what I want to know is: who is creating the threads (and what tools or profilers would show this), and is it WCF that is creating them?

  • Where would you document standardized complex data that is passed between many objects and methods?

    - by Eli
    Hi All, I often find myself with fairly complex data that represents something my objects will be working on. For example, in a task-list app, several objects might work with an array of tasks, each of which has attributes, temporal expressions, subtasks and sub-subtasks, and so on. One object will collect data from web forms and standardize it into a format consumable by the class that will save it to the database; another object will pull tasks from the database, put them in the standard format, and pass them to the display object, or the update object, etc. The data itself can become a fairly complex series of arrays and sub-arrays representing a 'task' or a list of tasks. For example, the structure below might be one entry in a task list, in the format that is consumable by the various objects that will work on it. Normally, I just document this in a file somewhere with an example. However, I am thinking about the best way to add it to something like PHPDoc, or another standard doc system. Where would you document consumable data formats that are used by many or all of the objects/methods in your app?

        Array
        (
            [Meta] => Array ( //etc. )
            [Sched] => Array ( [SchedID] => 32 [OwnerID] => 2 [StatusID] => 1
                               [DateFirstTask] => 2011-02-28 [DateLastTask] => [MarginMonths] => 3 )
            [TemporalExpressions] => Array
            (
                [0] => Array ( [type] => dw [TemporalExpID] => 3 [ord] => 2 [day] => 6 [month] => 4 )
                [1] => Array ( [type] => dm [TemporalExpID] => 32 [day] => 28 [month] => 2 )
            )
            [Task] => Array ( [SchedTaskID] => 32 [SchedID] => 32 [OwnerID] => 2 [UserID] => 5 [ClientID] => 9
                              [Title] => Close Prior Year [Body] => [DueTime] => )
            [SubTasks] => Array
            (
                [101] => Array ( [SchedSubTaskID] => 101 [ParentST] => [RootT] => 32 [UserID] => 2
                                 [Title] => Review Profit and Loss by Class [Body] => [DueDiff] => 0 )
                [102] => Array ( [SchedSubTaskID] => 102 [ParentST] => [RootT] => 32 [UserID] => 2
                                 [Title] => Review Balance Sheet [Body] => [DueDiff] => 0 )
                [103] => Array ( [SchedSubTaskID] => 103 [ParentST] => [RootT] => 32 [UserID] => 2
                                 [Title] => Review Current Year for Prior Year Expenses to Accrue
                                 [Body] => Look at Journal Entries that are templates as well. [DueDiff] => 0 )
                [104] => Array ( [SchedSubTaskID] => 104 [ParentST] => [RootT] => 32 [UserID] => 2
                                 [Title] => Review Prior Year Membership from 11/1 - 12/31 to Accrue to Current Year
                                 [Body] => [DueDiff] => 0 )
                [105] => Array ( [SchedSubTaskID] => 105 [ParentST] => [RootT] => 32 [UserID] => 2
                                 [Title] => Enter Vacation Accrual [Body] => [DueDiff] => 0 )
                [106] => Array ( [SchedSubTaskID] => 106 [ParentST] => 105 [RootT] => 32 [UserID] => 2
                                 [Title] => Email Peter requesting Vacation Status of Employees at Year End
                                 [Body] => We need Employee Name, Rate and Days of Vacation left to use. We also
                                           need to know if the employee used any of the prior year's vacation.
                                 [DueDiff] => 43 )
                [107] => Array ( [SchedSubTaskID] => 107 [ParentST] => [RootT] => 32 [UserID] => 2
                                 [Title] => Grants Receivable at Year End [Body] => [DueDiff] => 0 )
                [108] => Array ( [SchedSubTaskID] => 108 [ParentST] => 107 [RootT] => 32 [UserID] => 2
                                 [Title] => Email Peter Requesting if there were any Grants Receivable at year end
                                 [Body] => [DueDiff] => 43 )
            )
        )

  • How to manipulate data in a View using ASP.NET MVC RC 2?

    - by Picflight
    I have a table [Users] with the following columns: [UserId] INT, [BirthDate] SmallDateTime, [Gender] Bit, [Active] Bit. Gender and Active are bit columns that hold either 0 or 1. I am displaying this data in a table on my View. For Gender I want to display 'Male' or 'Female'; how and where do I map the 1s and 0s? Is it done in the repository where I fetch the data, or in the View? For the Active column I want to show a checkbox that posts back when its selection changes and updates the Active field in the database. How is this done without Ajax or jQuery?

  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which can convert videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses zlib to compress each frame individually and appends the compressed data to a file. Once all frames have been written, a journal that includes metadata for each frame (including file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream to do the file I/O, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random-access reads should happen in constant time. Any thoughts as to whether I should investigate this further, and if so, does anyone have any tips on things to look out for? Thanks!

  • Effective methods for reading and writing large files in C

    - by Bertholt Stutley Johnson
    I'm writing an application that deals with very large user-generated input files. The program will copy about 95 percent of the file, effectively duplicating it while switching a few words and values in the copy, and then append the copy (in chunks) to the original file, such that each block in the original (consisting of between 10 and 50 lines) is followed by the copied and modified block, then the next original block, and so on. The user-generated input conforms to a certain format, and it is highly unlikely that any line in the original file is longer than 100 characters. Which would be the better approach?

    a) Use one file pointer, with variables that hold the current read position and write position, seeking the file pointer back and forth to read and write; or
    b) Use multiple file pointers, one for reading and one for writing.

    I am mostly concerned with the efficiency of the program; the input files will reach up to 25,000 lines, each about 50 characters long. Thanks!

  • How is 'processing credit card data' defined (PCI)?

    - by Chris
    If I have a web application and I receive credit card data transmitted via a POST request by a web browser over HTTPS, and I immediately open an SSL socket to a remote PCI-compliant card processor to forward the data and wait for a response, am I allowed to do that? Or is receiving the data with my application and forwarding it already a case of "processing credit card data"? If I create an iframe that is displayed in the client's browser to enter card data, and this iframe posts the data via HTTPS to the remote card processor (directly!), is this already a case of processing credit card data, even if my application code doesn't touch the entered data with any event handlers? I'm interested in the definition of "credit card data processing": when does an application start to be a processing application? Can somebody point me to the section in the PCI DSS standard that clearly defines when you start to be a processing application? Thanks.

  • Transforming binary data using SSIS and SQL Server 2008

    - by Rick
    Hello All - I have a task to import, transform and extract zipped binary files that contain both text data and embedded binary data. Within the files is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that essentially grabs all the files from the directory (currently there are 13K files of varying sizes) and extracts the data on a single thread, line by line, inserting to the database. As you can imagine, this is a very slow process and unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list:

    1. How do I iterate through a packet of data using SSIS? In the app the file is decompressed and then parsed using streams and byte arrays, and routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap the app code up into script task(s) and let them do the custom processing?
    2. The data is separated by year, and the SQL Server tables are partitioned by year as well. I need to be able to catch bad file data and process it by hand, most likely.
    3. Should I simply load the zipped file into SQL Server as a blob and parse the file with T-SQL? Would that be multi-threaded if done that way? I am not sure how to do the parsing in T-SQL that is involved here. Which do you think would be faster?
    4. Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up?

    Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick

  • Remove duplicates from object array data in Java

    - by zahir hussain
    Hi, I want to know how to remove duplicates from an object array. For example:

        cat[] c = new cat[10];
        for (int i = 0; i < c.length; i++) {
            c[i] = new cat();   // each element must be created before use
        }
        c[0].data = "ji";
        c[1].data = "pi";
        c[2].data = "ji";
        c[3].data = "lp";
        c[4].data = "ji";
        c[5].data = "pi";
        c[6].data = "jis";
        c[7].data = "lp";
        c[8].data = "js";
        c[9].data = "psi";

    (The original snippet indexed the array from 1 through 10, but a 10-element Java array is indexed 0 through 9, and each element has to be instantiated before its fields are set.) I would like to remove the entries with duplicate data values from the object array. Thanks in advance.
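
    A minimal sketch of one way to do this, assuming the cat class above exposes a public String field named data: track the values already seen in a HashSet and keep only the first object for each distinct value.

        import java.util.*;

        Set<String> seen = new HashSet<>();
        List<cat> unique = new ArrayList<>();
        for (cat x : c) {
            if (seen.add(x.data)) {          // add() returns false for a value seen before
                unique.add(x);
            }
        }
        cat[] deduped = unique.toArray(new cat[0]);
        // deduped now holds the cats with data "ji", "pi", "lp", "jis", "js", "psi"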

  • Best Zend Framework architecture for large reporting site?

    - by Andy
    I have a site of about 60 tabular report pages that I want to convert to Zend Framework. Each report has two states: an empty report, and the report filled in with data. Each report has its own set of input boxes and select drop-downs to narrow down searches; you click Submit and it retrieves the data. That is all each page does. Do I create 60 controllers, each with a default index action and a getData action? Nothing I have read online really describes how to architect a real site.

  • Large ResultSet on PostgreSQL query

    - by tuler
    I'm running a query against a table in a PostgreSQL database. The database is on a remote machine, and the table has around 30 child tables using PostgreSQL's partitioning capability. The query returns a large result set, around 1.8 million rows. In my code I use Spring's JDBC support, method JdbcTemplate.query, but my RowCallbackHandler is never called. My best guess is that the PostgreSQL JDBC driver (I use version 8.3-603.jdbc4) is accumulating the whole result in memory before calling my code. I thought the fetchSize configuration could control this, but I tried it and nothing changed, even though I did it as the PostgreSQL manual recommends. This query worked fine when I used Oracle XE, but I'm trying to migrate to PostgreSQL because of the partitioning feature, which is not available in Oracle XE. My environment: PostgreSQL 8.3, Windows Server 2008 Enterprise 64-bit, JRE 1.6 64-bit, Spring 2.5.6, PostgreSQL JDBC driver 8.3-603.
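
    For reference, a minimal JDBC sketch of the conditions the PostgreSQL driver requires before it will stream rows with a cursor instead of buffering the entire result: autocommit off, a forward-only statement, and a non-zero fetch size (the URL, credentials and query below are placeholders). With Spring, the equivalent is calling setFetchSize on the JdbcTemplate and running the query inside a transaction so that autocommit is disabled; try-with-resources as used here needs Java 7+, so on JRE 1.6 use finally blocks instead.

        import java.sql.*;

        public class StreamingQuery {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:postgresql://remotehost:5432/mydb";   // placeholder
                try (Connection conn = DriverManager.getConnection(url, "user", "secret")) {
                    conn.setAutoCommit(false);               // required for cursor-based fetching
                    try (Statement st = conn.createStatement(
                            ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                        st.setFetchSize(1000);               // rows per round trip to the server
                        try (ResultSet rs = st.executeQuery("SELECT * FROM big_partitioned_table")) {
                            while (rs.next()) {
                                // process one row at a time; memory use stays flat
                            }
                        }
                    }
                    conn.commit();
                }
            }
        }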

  • Large number of Google Map Markers and IE6?

    - by Sivakanesh
    I'm working on an application that generates a large number of Google Maps markers (2,000 to 7,000) via JSON. I'm also using MarkerCluster. It is quick in Chrome and Firefox, but IE6 takes a few minutes and then crashes the first time I try to zoom in. I'm not doing anything more than adding the markers to a map using jQuery and the Google Maps API. So I looked at the following URL on regular Google Maps: http://maps.google.co.uk/maps?f=q&source=s_q&hl=en&q=hotel&sll=53.182996,-2.581787&sspn=1.494529,4.927368&ie=UTF8&split=1&rq=1&ev=p&hq=hotel&hnear=&ll=53.123702,-2.730103&spn=1.496594,4.927368&t=h&z=8 It shows a lot of tiny markers (~1000) and works fine in IE6. Do you have any idea why that works while the markers added via the API struggle? Thanks

  • Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

    - by Levi Hackwith
    Let me start off by apologizing for not giving a code snippet; the project I'm working on is proprietary and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application:

    1. The user clicks a button.
    2. The server retrieves a list of images in the form of a data table.
    3. Each row in the table contains 8 data cells that in turn each contain one hyperlink.
    4. Each request by the user can contain up to 50 rows (I can change this number if need be).

    That means the table contains upwards of 800 individual DOM elements. My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time and take on average 4 to 6 seconds to complete. I'm looking for a way to speed up either of the above-mentioned jQuery functions when dealing with massive numbers of DOM elements that need to be removed or replaced. I hope my explanation helps.

  • Large File Download - Connection With Server Reset

    - by daveywc
    I have an ASP.NET website that allows the user to download largish files, 30 MB to about 60 MB. Sometimes the download works fine, but often it fails at some varying point before the download finishes, with a message saying that the connection with the server was reset. Originally I was simply using Server.TransmitFile, but after reading up a bit I am now using the code posted below. I am also setting the Server.ScriptTimeout value to 3600 in the Page_Init event.

        private void DownloadFile(string fname, bool forceDownload)
        {
            string path = MapPath(fname);
            string name = Path.GetFileName(path);
            string ext = Path.GetExtension(path);
            string type = "";

            // set known types based on file extension
            if (ext != null)
            {
                switch (ext.ToLower())
                {
                    case ".mp3":
                        type = "audio/mpeg";
                        break;
                    case ".htm":
                    case ".html":
                        type = "text/HTML";
                        break;
                    case ".txt":
                        type = "text/plain";
                        break;
                    case ".doc":
                    case ".rtf":
                        type = "Application/msword";
                        break;
                }
            }

            if (forceDownload)
            {
                Response.AppendHeader("content-disposition",
                    "attachment; filename=" + name.Replace(" ", "_"));
            }

            if (type != "")
            {
                Response.ContentType = type;
            }
            else
            {
                Response.ContentType = "application/x-msdownload";
            }

            System.IO.Stream iStream = null;

            // Buffer to read 10K bytes in chunk:
            byte[] buffer = new Byte[10000];

            // Length of the file:
            int length;

            // Total bytes to read:
            long dataToRead;

            try
            {
                // Open the file.
                iStream = new System.IO.FileStream(path, System.IO.FileMode.Open,
                    System.IO.FileAccess.Read, System.IO.FileShare.Read);

                // Total bytes to read:
                dataToRead = iStream.Length;

                //Response.ContentType = "application/octet-stream";
                //Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);

                // Read the bytes.
                while (dataToRead > 0)
                {
                    // Verify that the client is connected.
                    if (Response.IsClientConnected)
                    {
                        // Read the data in buffer.
                        length = iStream.Read(buffer, 0, 10000);

                        // Write the data to the current output stream.
                        Response.OutputStream.Write(buffer, 0, length);

                        // Flush the data to the HTML output.
                        Response.Flush();

                        buffer = new Byte[10000];
                        dataToRead = dataToRead - length;
                    }
                    else
                    {
                        // prevent infinite loop if user disconnects
                        dataToRead = -1;
                    }
                }
            }
            catch (Exception ex)
            {
                // Trap the error, if any.
                Response.Write("Error : " + ex.Message);
            }
            finally
            {
                if (iStream != null)
                {
                    // Close the file.
                    iStream.Close();
                }
                Response.Close();
            }
        }

  • IE Problem: Jagged Scrolling and Dragging Inside Large Viewport

    - by br4inwash3r
    My site is a single-page website with a very large "canvas" size, and to navigate around the site I'm using the jQuery scrollTo and Dragscrollable plugins. In IE 7 and 8 the scrolling/dragging movement is very jagged. At first I thought it was my script or some other plugin causing this, but after I stripped everything down it's still the same. I've tried a few tips I've found around here, but none is really working for me. I know I should be asking this question of the plugin developers, and I did; I just thought maybe you have some other solution for this issue. Here's the URL of the demo site: http://satuhati.com/bare/template/ I really appreciate any help you can give. :) Thanks.

  • Speeding up inner joins between a large table and a small table

    - by Zaid
    This may be a silly question, but it may shed some light on how joins work internally. Let's say I have a large table L and a small table S (100K rows vs. 100 rows). Would there be any difference in terms of speed between the following two options?

        OPTION 1:                    OPTION 2:
        ---------                    ---------
        SELECT *                     SELECT *
        FROM L INNER JOIN S          FROM S INNER JOIN L
          ON L.id = S.id;              ON L.id = S.id;

    Notice that the only difference is the order in which the tables are joined. I realize performance may vary between different SQL implementations; if so, how would MySQL compare to Access?

  • C#: Efficiently search a large string for occurrences of other strings

    - by Jon
    Hi, I'm using C# to continuously search for multiple string "keywords" within large strings of 4 KB or more. This code is constantly looping, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bottleneck is the keyword-matching method. I've found a few possibilities, and all of them give similar efficiency:

    1. http://tomasp.net/articles/ahocorasick.aspx : I do not have enough keywords for this to be the most efficient algorithm.
    2. Regex, using an instance-level, compiled regex: provides more functionality than I require, and not quite enough efficiency.
    3. String.IndexOf: I would need to write a "smart" version of this for it to be efficient enough; looping through each keyword and calling IndexOf doesn't cut it.

    Does anyone know of any algorithms or methods that I can use to attain my goal?
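
    As an illustration of the "smart IndexOf" idea (sketched in Java here, though the question is about C#; the approach translates directly): bucket the keywords by first character, then scan the text once and test only the keywords that could start at each position. This is a hypothetical sketch, not a tuned implementation.

        import java.util.*;

        public class MultiKeywordScan {
            // Returns {startIndex, keywordLength} pairs for every occurrence.
            static List<int[]> findAll(String text, String[] keywords) {
                // Index keywords by their first character.
                Map<Character, List<String>> byFirst = new HashMap<>();
                for (String k : keywords) {
                    byFirst.computeIfAbsent(k.charAt(0), ch -> new ArrayList<>()).add(k);
                }
                List<int[]> hits = new ArrayList<>();
                for (int i = 0; i < text.length(); i++) {
                    List<String> candidates = byFirst.get(text.charAt(i));
                    if (candidates == null) continue;      // no keyword starts with this char
                    for (String k : candidates) {
                        if (text.regionMatches(i, k, 0, k.length())) {
                            hits.add(new int[] { i, k.length() });
                        }
                    }
                }
                return hits;
            }

            public static void main(String[] args) {
                for (int[] hit : findAll("the quick brown fox", new String[] { "quick", "fox", "qux" })) {
                    System.out.println("match at " + hit[0] + ", length " + hit[1]);
                }
            }
        }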
