Search Results

Search found 44076 results on 1764 pages for 'large text'.

Page 83 of 1764

  • Large svn external

    - by MPelletier
    I have a project that uses a large library residing in its own repository. We use TortoiseSVN on the client, and the server runs the enterprise edition of VisualSVN. The project itself has the "standard" structure: trunk, tags, branches. Each branch, tag, and the trunk contains the library, set as an external (svn:externals property). If I check out the entire tree, I get the library several times, which is getting ridiculously repetitive. Is there a recommended structure for this? Or perhaps a way to avoid fetching all the externals (the other externals are much smaller and easier to manipulate)?
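
    A minimal sketch of one workaround, assuming only some checkouts need to skip the library: svn checkout and svn update both accept an --ignore-externals switch, and TortoiseSVN exposes the same choice as an "Omit externals" option in its checkout/update dialogs. The URL and paths below are placeholders.

        # Fetch the tree without pulling in any externals
        svn checkout --ignore-externals https://server/svn/project/trunk project-trunk

        # Later updates can also skip them
        svn update --ignore-externals project-trunk

        # A plain "svn update" (without the switch) pulls the externals in later when they are actually needed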

    Read the article

  • Large XML files in a DataSet (out of memory)

    - by dklein
    Hi folks, I am currently trying to load a fairly large XML file into a DataSet. The XML file is about 700 MB, and every time I try to read it, it takes a long time and eventually throws an "out of memory" exception. DataSet ds = new DataSet(); ds.ReadXml(pathtofile); The main problem is that I need to use these DataSets (I use them to import the data from the XML file into a Sybase database: foreach table, foreach row, foreach column) and that I have no schema file. I have already googled for a while, but I only found solutions that won't work for me.
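
    A rough sketch of the usual streaming workaround, assuming the XML is a flat list of repeating record elements (the element name "record", the column names, and the bulkInsert delegate below are all made up): read the file with XmlReader and flush rows to the database in batches, so memory use stays flat regardless of file size.

        using System;
        using System.Data;
        using System.Xml;

        class XmlStreamImporter
        {
            // Streams the 700 MB file instead of calling DataSet.ReadXml on it.
            public static void Import(string pathToFile, Action<DataTable> bulkInsert)
            {
                var buffer = new DataTable("record");
                buffer.Columns.Add("Name", typeof(string));
                buffer.Columns.Add("Value", typeof(string));

                using (XmlReader reader = XmlReader.Create(pathToFile))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "record")
                        {
                            DataRow row = buffer.NewRow();
                            row["Name"] = reader.GetAttribute("name");
                            row["Value"] = reader.GetAttribute("value");
                            buffer.Rows.Add(row);

                            if (buffer.Rows.Count >= 10000)   // flush a batch to Sybase
                            {
                                bulkInsert(buffer);
                                buffer.Clear();
                            }
                        }
                    }
                }
                if (buffer.Rows.Count > 0)
                    bulkInsert(buffer);   // flush the final partial batch
            }
        }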

    Read the article

  • System.OverflowException - Int32 value is too large or too small

    - by LonnieBest
    I need a little advice. I've got a Windows service that runs at night. In my development environment it runs without exception, but when I run it installed on other machines, I come in the morning to find a System.OverflowException saying that a value was either too large or too small for an Int32. I've carefully combed the service's C# code, and I have try/catch statements around everything that should catch any error and write it to a log without completely stopping my service, but this overflow exception still occurs and stops the service. I'd appreciate any conceptual advice on how to pinpoint what's causing an error like this.
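
    try/catch only covers the thread it wraps; an OverflowException thrown on a timer or thread-pool thread will still take the service down. A minimal sketch for at least capturing the offending stack trace before the crash (the service class name and log path are made up): hook AppDomain.UnhandledException in OnStart.

        using System;
        using System.IO;
        using System.ServiceProcess;

        public class NightlyService : ServiceBase
        {
            protected override void OnStart(string[] args)
            {
                // Logs anything that escapes your try/catch blocks, including
                // exceptions thrown on background and thread-pool threads.
                AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
                    File.AppendAllText(@"C:\Logs\NightlyService.log",
                        DateTime.Now + " " + e.ExceptionObject + Environment.NewLine);

                // ... existing startup code ...
            }
        }

    The logged stack trace will show whether the overflow comes from a conversion such as Convert.ToInt32 or from checked arithmetic, which is usually enough to pinpoint the data that only exists on the other machines.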

    Read the article

  • "code too large" compilation error in java

    - by trinity
    Hello all, Is there a maximum size for code in Java? I wrote a function with more than 10,000 lines; each line assigns a value to an array element: arts_bag[10792]="newyorkartworld"; arts_bag[10793]="leningradschool"; arts_bag[10794]="mailart"; arts_bag[10795]="artspan"; arts_bag[10796]="watercolor"; arts_bag[10797]="sculptures"; arts_bag[10798]="stonesculpture"; While compiling, I get this error: "code too large". How do I overcome this?
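
    The compiler is hitting the JVM's 64 KB bytecode limit for a single method, which 10,000+ assignments easily exceed. A minimal sketch of the usual fix, assuming the values can be moved into a plain text file on the classpath (the file name arts_bag.txt is made up): load the data at runtime instead of generating code for it.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.nio.charset.StandardCharsets;
        import java.util.ArrayList;
        import java.util.List;

        public class ArtsBagLoader {
            // Reads one value per line from /arts_bag.txt on the classpath.
            public static String[] load() throws IOException {
                List<String> values = new ArrayList<String>();
                try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                        ArtsBagLoader.class.getResourceAsStream("/arts_bag.txt"),
                        StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        values.add(line.trim());
                    }
                }
                return values.toArray(new String[0]);
            }
        }

    Splitting the assignments across several smaller methods also works, but keeping the data out of source code avoids the limit entirely.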

    Read the article

  • How to organize a large number of objects

    - by shane
    We have a large number of documents and metadata (XML files) associated with those documents. What is the best way to organize them? Currently we put them into a series of nested folders: /repository/category/date(when they were loaded into our db)/document_number.pdf and .xml. We use the path as a unique identifier for the document in our system. This is more versatile than putting them all in a single flat folder, and it is independent of our database/application, so we can reload the files in case of failure. Yet it introduces some limitations: for example, we can't move the files once they've been placed in this structure, and it takes work to arrange them this way. What is the best practice? How do websites such as Scribd deal with this problem?

    Read the article

  • Finding a string of random characters (with possible errors) within a large string of random characters

    - by mike
    I am trying to search a large string without spaces for a smaller string of characters. Using regex I can easily find perfect matches, but I can't figure out how to find partial matches. By partial matches I mean one or two extra characters in the string, or one or two characters that have been changed, or one of each; the first and last characters will always match, though. This would be similar to a spell checker, but there are no spaces and the strings don't contain actual words, just random hex digits. I figured out a way to find the string when there are no extra characters, using indexOf(string.charAt(0)) and indexOf(string.charAt(string.length()-1)) and looping through the characters between the two indexes. But this can be problematic when dealing with randomized characters, because of the possibility of finding the first and last characters at the correct spacing while none of the middle characters match. I've been scratching my head for hours on this issue. Any ideas?
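
    A rough sketch of one approach, written in Java since the question uses indexOf/charAt (the class and method names are made up): slide a window of roughly the pattern's length across the haystack and accept the first window whose Levenshtein (edit) distance to the pattern is within a small error budget, e.g. 2. The naive window loop is O(n·m²), which is usually fine for short hex patterns; the Bitap algorithm is the standard faster alternative.

        public class ApproxMatcher {

            // Classic dynamic-programming Levenshtein distance.
            static int levenshtein(String a, String b) {
                int[] prev = new int[b.length() + 1];
                int[] curr = new int[b.length() + 1];
                for (int j = 0; j <= b.length(); j++) prev[j] = j;
                for (int i = 1; i <= a.length(); i++) {
                    curr[0] = i;
                    for (int j = 1; j <= b.length(); j++) {
                        int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                        curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
                    }
                    int[] tmp = prev; prev = curr; curr = tmp;
                }
                return prev[b.length()];
            }

            // Returns the start index of the first window within maxErrors edits, or -1.
            static int findApprox(String haystack, String pattern, int maxErrors) {
                for (int start = 0; start < haystack.length(); start++) {
                    int maxLen = Math.min(pattern.length() + maxErrors, haystack.length() - start);
                    for (int len = pattern.length() - maxErrors; len <= maxLen; len++) {
                        if (len <= 0) continue;
                        String window = haystack.substring(start, start + len);
                        if (levenshtein(window, pattern) <= maxErrors) return start;
                    }
                }
                return -1;
            }
        }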

    Read the article

  • Indy FTP, large files and NAT routers

    - by Lobuno
    Hello! I have been using Indy to transfer files via FTP for years now, but have not found a satisfactory solution for the following problem. When a user behind a router is uploading a large file, sometimes the following happens: the file is uploaded OK, but in the meantime the command channel gets disconnected because of a timeout. Normally this doesn't happen with a direct connection to the server, because the server "knows" that a transfer is taking place on the data channel. Some routers are not aware of this, though, and the command channel is closed. Many programs send a NOOP command periodically to keep the command channel alive, even if this is not part of the standard FTP specification. My question: how do I do that? Do I send the NOOP command in the OnWork event? Does this cause any collateral damage in some way - do I, for example, need to process some response? How do I best solve this problem?

    Read the article

  • Free Large datasets to experiment with Hadoop

    - by Sundar
    Do you know of any large datasets for experimenting with Hadoop that are free or low cost? Any related pointers/links are appreciated. Preference: at least 1 GB of data; web server production log data. A few that I have found so far: http://dumps.wikimedia.org/enwiki/20100130/ http://wiki.freebase.com/wiki/Data_dumps http://aws.amazon.com/publicdatasets/ Also, can we run our own crawler to gather data from sites, e.g. Wikipedia? Any pointers on how to do this are appreciated as well.

    Read the article

  • Doing a large number of upserts as fast as possible

    - by Jason Swett
    My app (which uses MySQL) is doing a large number of subsequent upserts. Right now my SQL looks like this: INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL OR','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123') So far I've found INSERT IGNORE to be the fastest way to achieve upserts. Selecting a record to see if it exists and then either updating it or inserting a new one is too slow. Even this is not as fast as I'd like because I need to do a separate statement for each record. Sometimes I'll have around 50,000 of these statements in a row. Is there a way to take care of all of these in just one statement, without deleting any existing records?
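
    MySQL lets a single INSERT carry many rows, so the 50,000 statements can be grouped into batches of, say, 1,000 rows each. A sketch using the same made-up values: INSERT IGNORE silently skips rows that hit a duplicate key, while ON DUPLICATE KEY UPDATE is the variant to use if existing rows should be updated rather than left alone.

        INSERT IGNORE INTO customer (name, customer_number, social_security_number, phone)
        VALUES
          ('VICTOR H KINDELL', '123', '123', '123'),
          ('VICTOR H KINDELL OR', '123', '123', '123'),
          ('TRACY L WALTER PERSONAL REP FOR', '123', '123', '123');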

    Read the article

  • serving large file using select, epoll or kqueue

    - by xask
    Nginx uses epoll or other multiplexing techniques (select) to handle multiple clients, i.e. it does not spawn a new thread for every request, unlike Apache. I tried to replicate the same thing in my own test program using select. I could accept connections from multiple clients by creating a non-blocking socket and using select to decide which client to serve; my program simply echoes their data back to them. It works fine for small data transfers (a few bytes per client). The problem occurs when I need to send a large file over a connection to a client: since I have only one thread to serve all clients, I cannot resume serving the other clients until I have finished reading the file and writing it to the socket. Is there a known solution to this problem, or is it best to create a thread for every such request?
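
    A minimal sketch of the usual single-threaded pattern, assuming Linux with epoll and a non-blocking client socket (the conn struct is an assumption, filled in by the accept path): keep a per-connection file offset and push out the next chunk only when epoll reports the socket writable, so no single transfer blocks the event loop. sendfile() avoids copying the file through userspace; a read()/write() pair with a small buffer works the same way on platforms without it.

        #include <errno.h>
        #include <sys/epoll.h>
        #include <sys/sendfile.h>
        #include <unistd.h>

        struct conn {
            int   sock_fd;   /* non-blocking client socket */
            int   file_fd;   /* file being served          */
            off_t offset;    /* bytes sent so far          */
            off_t length;    /* total file size            */
        };

        /* Call whenever epoll_wait reports EPOLLOUT for c->sock_fd.
         * Returns 1 when the whole file has been sent, 0 to wait for the
         * next EPOLLOUT, -1 on error. */
        int serve_chunk(int epoll_fd, struct conn *c)
        {
            while (c->offset < c->length) {
                ssize_t n = sendfile(c->sock_fd, c->file_fd, &c->offset,
                                     (size_t)(c->length - c->offset));
                if (n > 0)
                    continue;                 /* socket still accepting data */
                if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
                    return 0;                 /* send buffer full: resume on next EPOLLOUT */
                return -1;                    /* real error or peer closed */
            }
            epoll_ctl(epoll_fd, EPOLL_CTL_DEL, c->sock_fd, NULL);  /* transfer finished */
            return 1;
        }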

    Read the article

  • LaTeX large division sign in a math formula

    - by Anna
    Hi, I have been looking for an answer for some time now; I hope you can give me a quick tip. I have an equation with many divisions inside, i.e.: $\frac{\frac{a_1}{a_2}} {\frac{b_1}{b_2}}$ To make it more readable, I decided to change the large fraction into a "/" sign, i.e. $\frac{a_1}{a_2} / \frac{b_1}{b_2}$ The problem is that the "/" sign remains small, and it is quite ugly. How do I make the "/" sign big? How do I make the formula more readable? Thanks.
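
    Two common fixes, sketched below: give the slash an explicit size with \bigg/, or let it scale to its surroundings with the \left. ... \middle/ ... \right. construction (the \middle primitive needs e-TeX, which any modern LaTeX provides).

        % Fixed, explicitly sized slash
        $\frac{a_1}{a_2} \bigg/ \frac{b_1}{b_2}$

        % Slash that scales automatically with the fractions around it
        $\left. \frac{a_1}{a_2} \middle/ \frac{b_1}{b_2} \right.$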

    Read the article

  • SQL Server 2000 tables

    - by klork
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into one table per user. This would mean we could end up with a very large number of tables, potentially up to 2,147,483,647 considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • Converting a large SQL Server Database to Azure Storage

    - by Laith
    Hi guys, I have a very large database structure (the data is not important at this point; I can migrate the info in the db pretty easily once the structure is done), all residing in SQL Server, and I have even published it to SQL Azure. But thinking about SQL Azure's size limitations made me decide to switch most of the tables that do not need all the bells and whistles of SQL Azure to Azure Table and Blob storage. I was thinking of writing a T4 (.tt) template that does this, but was wondering if there is a tool that already does it. Any ideas or thoughts? The only tables I would keep in SQL Azure are anything related to transactions, like payments. I appreciate your thoughts and advice.

    Read the article

  • Implement date picker and time picker on button click and store in edit text boxes

    - by user3597791
    Hi I am trying to implement a date picker and time picker in button click and store in edit text boxes. I have tried numerous things but since i suck at coding I cant get any of them to work. Please find my class and xml below and i would be grateful for any help import android.app.Activity; import android.content.Intent; import android.database.Cursor; import android.graphics.BitmapFactory; import android.net.Uri; import android.os.Bundle; import android.provider.MediaStore; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; import android.widget.ImageView; import android.widget.Toast; public class NewEvent extends Activity { private static int RESULT_LOAD_IMAGE = 1; private EventHandler handler; private String picturePath = ""; private String name; private String place; private String date; private String time; private String photograph; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.new_event); handler = new EventHandler(getApplicationContext()); ImageView iv_user_photo = (ImageView) findViewById(R.id.iv_user_photo); iv_user_photo.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { Intent intent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI); startActivityForResult(intent, RESULT_LOAD_IMAGE); } }); Button btn_add = (Button) findViewById(R.id.btn_add); btn_add.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { EditText et_name = (EditText) findViewById(R.id.et_name); name = et_name.getText().toString(); EditText et_place = (EditText) findViewById(R.id.et_place); place = et_place.getText().toString(); EditText et_date = (EditText) findViewById(R.id.et_date); date = et_date.getText().toString(); EditText et_time = (EditText) findViewById(R.id.et_time); time = et_time.getText().toString(); ImageView iv_photograph = (ImageView) findViewById(R.id.iv_user_photo); photograph = picturePath; Event event = new Event(); event.setName(name); event.setPlace(place); event.setDate(date); event.setTime(time); event.setPhotograph(photograph); Boolean added = handler.addEventDetails(event); if(added){ Intent intent = new Intent(NewEvent.this, MainEvent.class); startActivity(intent); }else{ Toast.makeText(getApplicationContext(), "Event data not added. 
Please try again", Toast.LENGTH_LONG).show(); } } }); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null != data) { Uri imageUri = data.getData(); String[] filePathColumn = { MediaStore.Images.Media.DATA }; Cursor cursor = getContentResolver().query(imageUri, filePathColumn, null, null, null); cursor.moveToFirst(); int columnIndex = cursor.getColumnIndex(filePathColumn[0]); picturePath = cursor.getString(columnIndex); cursor.close(); Here is my xml: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" android:padding="10dp"> <TextView android:id="@+id/tv_new_event_title" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Add New Event" android:textAppearance="?android:attr/textAppearanceLarge" android:layout_alignParentTop="true" /> <Button android:id="@+id/btn_add" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Add Event" android:layout_alignParentBottom="true" /> <ScrollView android:layout_width="match_parent" android:layout_height="match_parent" android:layout_below="@id/tv_new_event_title" android:layout_above="@id/btn_add"> <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="vertical"> <ImageView android:id="@+id/iv_user_photo" android:src="@drawable/add_user_icon" android:layout_width="100dp" android:layout_height="100dp"/> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Event:" /> <EditText android:id="@+id/et_name" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="text" > <requestFocus /> </EditText> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Place:" /> <EditText android:id="@+id/et_place" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="text" > <requestFocus /> </EditText> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Date:" /> <EditText android:id="@+id/et_date" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="date" /> <Button android:id="@+id/button" style="?android:attr/buttonStyleSmall" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Button" /> <requestFocus /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Time:" /> <EditText android:id="@+id/et_time" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="time" /> <Button android:id="@+id/button1" style="?android:attr/buttonStyleSmall" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Button1" /> <requestFocus /> </LinearLayout> </ScrollView> </RelativeLayout>
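
    A rough sketch of the picker wiring, assuming the button and EditText ids from the layout above (button/et_date for the date, button1/et_time for the time) and a simple day/month/year output format: show a DatePickerDialog or TimePickerDialog when the button is clicked and write the chosen value into the matching EditText. It needs these extra imports: android.app.DatePickerDialog, android.app.TimePickerDialog, android.widget.DatePicker, android.widget.TimePicker, java.util.Calendar.

        // Inside NewEvent.onCreate(), after setContentView(R.layout.new_event):
        final EditText etDate = (EditText) findViewById(R.id.et_date);
        Button dateButton = (Button) findViewById(R.id.button);
        dateButton.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                final Calendar c = Calendar.getInstance();
                new DatePickerDialog(NewEvent.this,
                        new DatePickerDialog.OnDateSetListener() {
                            @Override
                            public void onDateSet(DatePicker view, int year, int month, int day) {
                                etDate.setText(day + "/" + (month + 1) + "/" + year); // month is zero-based
                            }
                        },
                        c.get(Calendar.YEAR), c.get(Calendar.MONTH),
                        c.get(Calendar.DAY_OF_MONTH)).show();
            }
        });

        final EditText etTime = (EditText) findViewById(R.id.et_time);
        Button timeButton = (Button) findViewById(R.id.button1);
        timeButton.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                final Calendar c = Calendar.getInstance();
                new TimePickerDialog(NewEvent.this,
                        new TimePickerDialog.OnTimeSetListener() {
                            @Override
                            public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
                                etTime.setText(String.format("%02d:%02d", hourOfDay, minute));
                            }
                        },
                        c.get(Calendar.HOUR_OF_DAY), c.get(Calendar.MINUTE), true).show();
            }
        });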

    Read the article

  • OutOfMemoryException Processing Large File

    - by Krip
    We are loading a large flat file into BizTalk Server 2006 (original release, not R2) - about 125 MB. We run a map against it and then take each row and make a call out to a stored procedure. We receive the OutOfMemoryException during orchestration processing; the Windows service restarts, uses the full 2 GB of memory, and crashes again. The server is 32-bit and set to use the /3GB switch. I've also separated the flow into three hosts - one for receive, one for orchestration, and one for sends. Does anyone have any suggestions for getting this file to process without error? Thanks, Krip

    Read the article

  • Optimizing a large iteration of PHP objects (EAV-based)

    - by Aron Rotteveel
    I am currently working on a project that uses the EAV model. This works out quite well, but like many others I am now running into performance issues. The data set in this particular case consists of approximately 2,500 entities, each with approximately 150 attributes. Each entity and each attribute is represented by a PHP object. Since most parts of the application only iterate through a filtered set of entities, we have not had very large issues yet. Now, however, I am working on an algorithm that requires iteration over the entire dataset, which has a major impact on performance. This information is perhaps not very much to work with, but since this is an architectural problem, I am hoping for an architectural pattern to help me on the way as well. Each entity, including its attributes, takes up approximately 500 KB of memory.
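
    A rough sketch of one direction to explore, assuming a typical EAV table such as entity_attribute(entity_id, attribute_code, value) and a per-entity processEntity() callback (both names are made up): for the full-dataset pass, stream plain rows ordered by entity and work on one small array at a time instead of keeping 2,500 hydrated entity objects (each with 150 attribute objects) in memory at once.

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $stmt = $pdo->query(
            'SELECT entity_id, attribute_code, value
               FROM entity_attribute
           ORDER BY entity_id'
        );

        $currentId  = null;
        $attributes = array();

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            if ($currentId !== null && $row['entity_id'] !== $currentId) {
                processEntity($currentId, $attributes); // run the algorithm on one entity
                $attributes = array();                  // release memory before the next one
            }
            $currentId = $row['entity_id'];
            $attributes[$row['attribute_code']] = $row['value'];
        }
        if ($currentId !== null) {
            processEntity($currentId, $attributes);     // last entity
        }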

    Read the article

  • Making large toolbars like the iPod app

    - by andybee
    I am trying to create a toolbar programmatically (rather than via IB), very similar to the toolbar featured in the iPod app. Currently I've been experimenting with the UIToolbar class, but I'm not sure how (and if) you can make the toolbar buttons centrally aligned and large like those in the iPod app. Additionally, regardless of size, the gradient/reflection artwork never correctly respects the button size and is drawn as if the item were the default, smaller size. If this cannot be done with a standard UIToolbar, I guess I need to create my own view. In that case, can the reflection/gradient be created programmatically, or will it require some clever alpha-transparency Photoshopped artwork?

    Read the article

  • Request header is too large

    - by stck777
    I found several IllegalStateException exceptions in the logs:
        [#|2009-01-28T14:10:16.050+0100|SEVERE|sun-appserver2.1|javax.enterprise.system.container.web|_ThreadID=26;_ThreadName=httpSSLWorkerThread-80-53;_RequestID=871b8812-7bc5-4ed7-85f1-ea48f760b51e;|WEB0777: Unblocking keep-alive exception java.lang.IllegalStateException: PWC4662: Request header is too large
        at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:740)
        at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:657)
        at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:543)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.parseRequest(DefaultProcessorTask.java:712)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:577)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:831)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214)
        at com.sun.enterprise.web.portunif.PortUnificationPipeline$PUTask.doTask(PortUnificationPipeline.java:380)
        at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265)
        at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106) |#]
    Does anybody know which configuration changes would fix this?

    Read the article

  • design a large scale network for an organization

    - by Essam
    Hello. I am quite new to networking, and I want to design a large-scale network for an organization with an HQ and two branches. I want to use a class A address for that. My questions are: If I am using the network address 30.0.0.0 for the whole organization, how can it be distinguished from another organization or company in another country that is using the same address? Now, I have three locations for this organization, so I need 5 subnets: one for the HQ, one each for branch A and branch B, one for connecting branch A to the HQ, and one for connecting branch B to the HQ, since I will use a central DHCP server at the HQ. Is that number of subnets right? Is it advisable to use class A or class B for this organization in terms of the addresses that will be wasted (let's say it is a university with two branches in two different states)? That is all; your help is highly appreciated.

    Read the article

  • IE Problem: Jagged Scrolling and Dragging Inside Large Viewport

    - by br4inwash3r
    My site is a single-page website with a very large "canvas" size, and to navigate around the site I'm using the jQuery scrollTo and jQuery Dragscrollable plugins. In IE 7 & 8 the scrolling/dragging movement is very jagged. At first I thought it was my script or some other plugin causing this, but after I stripped everything down it's still the same. I've tried a few tips I've found around here, but none of them really works for me. I know I should be asking the plugin developers this question, and I did; I just thought maybe you guys have some other solution for this issue. Here's the URL to the demo site: http://satuhati.com/bare/template/ I really appreciate any help you can give :) Thx.

    Read the article

  • How can a large number of developers write software together without either a cumbersome process or poor quality software?

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this, every major issue has resulted in a new check, either automated or manual. As a result, the process of delivering software is becoming ever more burdensome. That requires more developers, which... well, you can see it is a vicious circle. We now have a problem with releasing software quickly: the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?

    Read the article

  • How to quickly analyse large MDB file

    - by Craig Johnston
    I need to know how to quickly analyse a large MDB file (about 1 GB) to see which tables are causing it to be so big. Is there something that will easily show me a breakdown of which tables are responsible for how much data? I need to know whether it is just this one customer who is using the application differently, or whether there is genuinely a lot of data in the MDB. This MDB is currently causing the VB app to crash, and I need to know why it is so big so that I can think about how to move some of the data into another 'archival' MDB. Migrating to SQL Server is not an option, unless the use of linked tables from an MDB is a realistic option.

    Read the article

  • OutOfMemoryException, stack size is huge, large number of threads

    - by Captain Comic
    Hello, I was profiling my .NET Windows service, trying to track down an OutOfMemoryException, and discovered that my stack size is huge and growing because the number of threads keeps growing. Each thread gets 1024 KB on a Windows x64 machine, so when my app has 754 threads the stack space alone is 772 MB. The problem for me is that I don't know where these threads come from. Initially my app has a very limited number of threads, and they keep growing with time. I have two suspicions - either these threads are created by WCF or by the database connection. My application uses both WCF and DataSets. Also, when I profile my app in ANTS/dotTrace, I can see a large number of System.ServiceModel.Channels.ClientReliableDuplexSessionChannel instances, and this number is increasing with time; I can see thousands of these objects being created. So what I want to know is who is creating the threads (tools or profilers to discover this) and whether it is WCF that is creating them.

    Read the article

  • Large number of Google Map Markers and IE6?

    - by Sivakanesh
    I'm working on an application that generates a large number of Google Map markers (2,000 - 7,000) via JSON. I'm also using MarkerCluster. It is quick in Chrome and Firefox, but IE6 takes a few minutes and then just crashes the first time I try to zoom in. I'm not doing any more than adding the markers to a map using jQuery and the GMap API. So I looked at the following URL of the regular Google Map: http://maps.google.co.uk/maps?f=q&source=s_q&hl=en&q=hotel&sll=53.182996,-2.581787&sspn=1.494529,4.927368&ie=UTF8&split=1&rq=1&ev=p&hq=hotel&hnear=&ll=53.123702,-2.730103&spn=1.496594,4.927368&t=h&z=8 It shows a lot of tiny markers (~1000) and works fine in IE6. Do you have any ideas why that works while the markers added via the API struggle? Thanks

    Read the article

  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which converts videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses libz to compress each frame individually and appends the compressed data to a file. Once all frames have been written, a journal that includes metadata for each frame (including its file offset and size) is written to the end of the file. I'm currently using ifstream and ofstream for the file I/O, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random-access reads should happen in constant time. Any thoughts as to whether I should investigate this further, and if so, does anyone have any tips for things to look out for? Thanks!
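
    A minimal sketch of the read side, assuming a 64-bit Linux build (so a multi-gigabyte mapping fits in the address space) and that the journal supplies each frame's offset and compressed/uncompressed sizes: map the whole file read-only and hand zlib a pointer straight into the mapping. Seeking to a frame is then just pointer arithmetic, and the kernel pages data in on demand; writing can stay on ofstream, since it is sequential anyway.

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <stdexcept>
        #include <vector>
        #include <zlib.h>

        class FrameFile {
        public:
            explicit FrameFile(const char* path) {
                fd_ = open(path, O_RDONLY);
                if (fd_ < 0) throw std::runtime_error("open failed");
                struct stat st;
                if (fstat(fd_, &st) != 0) throw std::runtime_error("fstat failed");
                size_ = static_cast<size_t>(st.st_size);
                void* mapped = mmap(nullptr, size_, PROT_READ, MAP_PRIVATE, fd_, 0);
                if (mapped == MAP_FAILED) throw std::runtime_error("mmap failed");
                base_ = static_cast<const unsigned char*>(mapped);
                // Access pattern is random seeks, so discourage aggressive readahead.
                madvise(mapped, size_, MADV_RANDOM);
            }

            ~FrameFile() {
                if (base_) munmap(const_cast<unsigned char*>(base_), size_);
                if (fd_ >= 0) close(fd_);
            }

            // offset / compressedSize / rawSize come from the journal at the end of the file.
            std::vector<unsigned char> readFrame(size_t offset, size_t compressedSize,
                                                 size_t rawSize) const {
                std::vector<unsigned char> frame(rawSize);
                uLongf destLen = rawSize;
                if (uncompress(frame.data(), &destLen, base_ + offset, compressedSize) != Z_OK)
                    throw std::runtime_error("uncompress failed");
                frame.resize(destLen);
                return frame;
            }

        private:
            int fd_ = -1;
            size_t size_ = 0;
            const unsigned char* base_ = nullptr;
        };

    Things to watch: the frame offset handed to zlib does not need page alignment (only the offset argument of mmap itself does, and it is 0 here), and on a 32-bit build the address space would be far too small for files of this size.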

    Read the article
