Search Results

Search found 13928 results on 558 pages for 'large scale nat'.

Page 54 of 558

  • Technical reasons for not having large background images in websites

    - by kees-kist
    Most websites tend to have either a solid color as the background, or a small image that is repeated. Why aren't more websites using a large image (such as a photo) as the background? I can think of the following reasons: 1) Problems with different screen resolutions: too small and gaps start to appear on the left and/or right side at higher resolutions; too big and lower resolutions only show part of the image. 2) Bandwidth, although this is unlikely to be a problem for most websites. Are there any other reasons why such backgrounds are not used more often?

    Read the article

  • slow record deletion with large ntext values

    - by asking
    I'm having trouble deleting some records via a stored procedure from a table in SQL Server 2008 R2 that has ntext columns. The stored proc is timing out, and running the query directly takes a very long time. The initial query was a straight "delete from y where x = z", and I've also tried running it in batches of 1000 inside transactions, but it is still slow and times out in the stored proc. The majority of the records in the table will not be deleted each time (it's not a one-off query but will be run repeatedly). The ntext columns are not used in the WHERE clause and I can't change the column types. Any suggestions on the quickest way to delete records with large ntext values? Thanks
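
    A minimal sketch of the batched-delete idea mentioned above, shown in Python with pyodbc rather than as a stored procedure; the connection string, table name (big_table) and column (x) are placeholders, and DELETE TOP (n) is the SQL Server way of capping each batch:

      import pyodbc

      # Placeholder connection string; adjust driver/server/database as needed.
      conn = pyodbc.connect(
          "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;",
          autocommit=False,
      )
      cursor = conn.cursor()

      while True:
          # Small batches keep each transaction (and its log usage) short, so rows
          # with large ntext values are removed in manageable chunks instead of
          # one long-running DELETE.
          cursor.execute("DELETE TOP (1000) FROM big_table WHERE x = ?", ("z",))
          affected = cursor.rowcount
          conn.commit()
          if affected <= 0:   # nothing left to delete (or the driver reports -1)
              break

      conn.close()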

    Read the article

  • POST data disappearing on large file upload

    - by DfKimera
    I'm having issues with a file upload utility in my PHP application. When sending large files (9 MB+) through the form, I get very odd behaviour: the POST data I've included in the form disappears, including the file information. I've already increased all the PHP limits I could (time limit, max input time, post max size, memory limit and upload max filesize) and I still can't get the proper behaviour. I've tried replacing the regular HTTP form with a Flash-based solution (SWFUpload, www.swfupload.org), and got the same behaviour. I've tried multiple files of similar sizes, so it's definitely not an issue with a particular file. I've debugged the POST vars sent using Firebug, and the correct variables are still there, together with the file. What could be going on here?

    Read the article

  • How to execute a large PHP script?

    - by atif089
    Basically, I may want to execute a script that could take as long as an hour or more. What I really want to do is send SMS to my users using a third-party API, so I supply my script with an array of phone numbers and fire the method to send an SMS for each one. Assuming it takes 5 seconds to send one SMS and I want to send 1000 SMS, that is roughly 1-2 hours. I can't use set_time_limit() because I am on a shared host. One way to do this is to store the numbers in a session, send each SMS, and use JavaScript to refresh the page until the end. But that way I need to keep my browser open, and the execution will stop if my internet connection drops. So, is there any better way to do this? I hope I am clear about what I want: to execute a large script that may take hours to run without it timing out.
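
    A rough sketch of the chunked, resumable approach, written in Python purely for illustration (the same shape works as a PHP script run from cron): a scheduler runs the script every few minutes, it sends one small batch, records how far it got, and exits long before any timeout. send_sms(), numbers.txt and offset.txt are hypothetical stand-ins for the third-party API and the number list.

      import os

      BATCH_SIZE = 50
      NUMBERS_FILE = "numbers.txt"   # one phone number per line (placeholder)
      OFFSET_FILE = "offset.txt"     # how many numbers have been sent so far

      def send_sms(number: str) -> None:
          # Placeholder for the real third-party API call.
          print(f"sending SMS to {number}")

      def main() -> None:
          offset = 0
          if os.path.exists(OFFSET_FILE):
              with open(OFFSET_FILE) as f:
                  offset = int(f.read().strip() or 0)

          with open(NUMBERS_FILE) as f:
              numbers = [line.strip() for line in f if line.strip()]

          # Send only one small batch per run, then record the new position.
          batch = numbers[offset:offset + BATCH_SIZE]
          for number in batch:
              send_sms(number)

          with open(OFFSET_FILE, "w") as f:
              f.write(str(offset + len(batch)))

      if __name__ == "__main__":
          main()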

    Read the article

  • Rails: large amount of data in a single insert, ActiveRecord gave out

    - by Nik
    So I have, I think, around 36,000 records, a number I wouldn't think was too large for a modern SQL database like MySQL. Each record has just two attributes. I collected them into one single insert statement:

      sql = "INSERT INTO tasks (attrib_a, attrib_b) VALUES (c1,d1),(c2,d2),(c3,d3)...(c36000,d36000);"
      ActiveRecord::Base.connection.execute sql

    This gave out with the following backtrace:

      from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log'
      from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute_without_analyzer'
      from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute'
      from C:/Ruby/lib/ruby/1.8/benchmark.rb:308:in `realtime'
      from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute'
      from (irb):53
      from C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/vendor/tzinfo-0.3.12/tzinfo/time_or_datetime.rb:242

    I don't know if the above info is enough; please ask for anything that I didn't provide here. Any idea what this is about? Thank you!
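
    One common reason for a single 36,000-row INSERT giving out is a server-side limit such as MySQL's max_allowed_packet; that is only a guess here, but the usual workaround is the same either way: split the VALUES list into chunks and issue one INSERT per chunk. A sketch in Python for illustration (the chunking is just as easy in Ruby around connection.execute), assuming the values are plain numbers so no SQL escaping is needed:

      def chunked_inserts(rows, chunk_size=1000):
          """Yield multi-row INSERT statements covering `rows` in groups of `chunk_size`."""
          for start in range(0, len(rows), chunk_size):
              chunk = rows[start:start + chunk_size]
              # Numeric values only; anything else would need proper escaping/binding.
              values = ",".join("({0},{1})".format(a, b) for a, b in chunk)
              yield "INSERT INTO tasks (attrib_a, attrib_b) VALUES {0};".format(values)

      rows = [(i, i * 2) for i in range(36000)]   # stand-in data
      for sql in chunked_inserts(rows):
          pass  # e.g. ActiveRecord::Base.connection.execute(sql) on the Rails side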

    Read the article

  • Large file download for a Rails project

    - by Horace Ho
    A client project will go online two months from now. One of the changed requirements is to support large file downloads (10 to 15 MB per RAW camera file, with an expected 1000 to 5000 file downloads per day) worldwide for their customers. The process will be: 1) an upload screen that stores files via Paperclip in the Rails app's local public folder; 2) an hourly task that uploads them to web storage (S3?); 3) updating the download URL from the Paperclip URL to the web URL. Questions: is there a gem/plug-in for this purpose? If not, any gem/plug-in for S3 to recommend? Questions about the storage provider: is S3 recommended, or is there another service to recommend? The baseline is: the client's web server does not, and will not, have the bandwidth to handle the downloads. Thanks
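
    For the hourly "push to web storage and swap the URL" step, here is a minimal sketch using Python and boto3, purely to illustrate the flow; in the Rails app itself an S3 gem (or Paperclip's own S3 storage option) would fill the same role. The bucket name, key layout and local path are made-up placeholders.

      import boto3

      s3 = boto3.client("s3")
      BUCKET = "example-raw-files"   # placeholder bucket

      def push_to_s3(local_path: str, key: str, expires_seconds: int = 3600) -> str:
          """Upload one RAW file and return a time-limited download URL for it."""
          s3.upload_file(local_path, BUCKET, key)
          return s3.generate_presigned_url(
              "get_object",
              Params={"Bucket": BUCKET, "Key": key},
              ExpiresIn=expires_seconds,
          )

      # Example usage (placeholder paths); store `url` on the record in place of
      # the local Paperclip URL once the upload succeeds.
      url = push_to_s3("public/system/raw/123/IMG_0001.CR2", "raw/123/IMG_0001.CR2")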

    Read the article

  • Enable export to XML via HTTP on a large number of models with child relations

    - by Vasil
    I have a large number of models (120+) and I would like to let users of my application export all of the data from them in XML format. I looked at django-piston, but I would like to do this with minimal code. Basically I'd like to have something like this: GET /export/applabel/ModelName/ would stream all instances of ModelName in applabel, together with their trees of related objects. I'd like to do this without writing code for each model. What would be the best way to do this?
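
    A minimal sketch of a generic export view built on Django's stock XML serializer, so no per-model code is needed; note that the stock serializer emits related objects as primary keys only, so a full tree of related objects would still need natural keys or a custom serializer. The URL pattern and function name are illustrative, not from the question.

      from django.apps import apps        # on very old Django: django.db.models.get_model
      from django.core import serializers
      from django.http import HttpResponse

      def export_model(request, app_label, model_name):
          # Resolve the model dynamically from the URL, then serialize every instance.
          model = apps.get_model(app_label, model_name)
          xml = serializers.serialize("xml", model.objects.all())
          return HttpResponse(xml, content_type="application/xml")

      # urls.py (illustrative):
      #   path("export/<app_label>/<model_name>/", export_model)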

    Read the article

  • How to transfer large files from desktop to server (.NET)

    - by rahulchandran
    I am writing a .NET 2.0-based desktop client that will send large files (well, largish: under 2 GB) to a server. I need to develop the server as well, and it can be on any technology. It should be secure, so an underlying SSL stream is needed. What are my options? Any obvious caveats etc. I should be aware of? To my mind the simplest solution is to open a TCP/IP connection over SSL to the server, send n packets each of size M bytes, have the server append the chunks to the file, and finally send an EOF packet as well. Is this horrible? Will performance suffer on the server with all these disk writes? Are there any other clever options? I am limited to .NET 2.0 on the client; if I did move to a WCF client, would it buy me something magical and cool for this scenario? Thanks
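
    A sketch of the length-prefixed-chunks framing described above, written in Python only to illustrate the wire format; the real client would presumably use SslStream over a TcpClient in .NET 2.0. Host, port, chunk size and file name are placeholders.

      import socket
      import ssl
      import struct

      HOST, PORT = "files.example.com", 4433   # placeholders
      CHUNK_SIZE = 64 * 1024

      def upload(path: str) -> None:
          context = ssl.create_default_context()
          with socket.create_connection((HOST, PORT)) as raw:
              with context.wrap_socket(raw, server_hostname=HOST) as tls:
                  with open(path, "rb") as f:
                      while True:
                          chunk = f.read(CHUNK_SIZE)
                          if not chunk:
                              break
                          # Length-prefix each chunk so the server knows how many
                          # bytes to append to the file before the next frame.
                          tls.sendall(struct.pack(">I", len(chunk)) + chunk)
                  # A zero-length frame acts as the "EOF packet".
                  tls.sendall(struct.pack(">I", 0))

      if __name__ == "__main__":
          upload("big_upload.bin")   # placeholder file name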

    Read the article

  • Paging a UIScrollView with a large PDF

    - by Fousa
    I'm trying to create a simple UIScrollView with paging, and I want to be able to scroll through a large PDF document, but this gives me some problems... I tried the following options: converting all the PDF pages to UIImages at startup, which works but is very slow to start; and manually drawing the PDF page in drawRect, but yet again this was slow... I would prefer not to load everything at startup but to do it during usage. Has anyone done this recently? I can't seem to find a nice example project. Thanks! Jelle

    Read the article

  • Namespacing large JavaScript like jQuery

    - by frenchie
    I have a very large JavaScript file: it's over 9,000 lines. The code looks like this: var GlobalVar1 = ""; var GlobalVar2 = null; function A() {...} function B(SomeParameter) {...} I'm using the Google compiler, and the global variables and functions get renamed a, b, c..., so there's a good chance of a collision later with some outside code. What I want to do is have my code organized like the jQuery library, where everything is accessible through $. Is there a way to namespace my code so that everything is behind a # character, for example? I'd like to be able to call my code like this: #.GlobalVar, #.functionA(SomeParameter). How can I do this? Thanks.

    Read the article

  • Calculating very large exponents in Python

    - by miraclesoul
    Currently I am simulating my cryptographic scheme in order to test it. I have developed the code but I am stuck at one point. I am trying to take g**x, where g is a 256-bit number and x is a 256-bit number. Python hangs at this point. I have read a lot of forums and threads, but only come to the conclusion that Python hangs because it is hard for it to process such large numbers. Any idea how this can be done? Any two-line piece of code, any library, anything that can be done. (Also, please note that I am a new Python user and this is the first time I have done any programming in it, so no complex methods. Hope you understand.)
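
    The hang is not Python struggling with big integers as such: the exact value of g**x has roughly x * log2(g) bits, i.e. on the order of 2**256 * 256 bits, which no machine can hold. Cryptographic schemes normally need the result modulo some prime, and Python's built-in three-argument pow() computes that directly via fast modular exponentiation. A small sketch, using the Curve25519 prime as an example modulus (the actual modulus would come from the scheme being simulated):

      import secrets

      p = 2**255 - 19            # Curve25519 prime, used here only as an example modulus
      g = secrets.randbits(256)  # arbitrary 256-bit base
      x = secrets.randbits(256)  # arbitrary 256-bit exponent

      result = pow(g, x, p)      # returns almost instantly, unlike g ** x
      print(result)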

    Read the article

  • Redirecting a large number of URLs with htaccess or php header

    - by Peter
    I have undergone a major website overhaul and now have 5,000+ incoming links from search engines, external sites, bookmark services etc. that lead to dead pages or 404 errors. A lot of the pages have corresponding "permalinks" or a known replacement hierarchy/URL structure. I've started to list the main redirects in htaccess, or as physical files containing simply a header location redirect, which is clearly not sustainable! What would be the best method to map all of the old link addresses to their corresponding new addresses: htaccess, PHP headers, MySQL, a sitemap file, or is it better to leave the broken links and wait for search engines etc. to re-index my site? Are there any implications of having a large number of redirecting files for this temporary period until the links are reset?
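
    A sketch of the single-lookup-table idea: keep all 5,000+ old-to-new pairs in one mapping and answer any old URL with one 301 handler, rather than thousands of individual rules or stub files. It is shown as a tiny WSGI app in Python purely for illustration; the same shape works as a single PHP front controller backed by a MySQL table, or as an Apache RewriteMap. The paths are made-up examples.

      # Old URL -> new URL; in practice this would be loaded from a CSV or a database table.
      REDIRECTS = {
          "/old/article-123.html": "/articles/123-new-permalink/",
          "/old/category.php?id=7": "/categories/widgets/",
      }

      def app(environ, start_response):
          path = environ.get("PATH_INFO", "")
          if environ.get("QUERY_STRING"):
              path += "?" + environ["QUERY_STRING"]
          target = REDIRECTS.get(path)
          if target:
              # 301 tells search engines the move is permanent, so they update
              # their index instead of keeping the dead URL.
              start_response("301 Moved Permanently", [("Location", target)])
              return [b""]
          start_response("404 Not Found", [("Content-Type", "text/plain")])
          return [b"Not Found"]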

    Read the article

  • Region or ItemsSource for large data set in ListBox

    - by Ryan
    I'm having trouble figuring out the best solution for the following situation. I'm using Prism 4.1, MEF, and .NET 4.0. I have a Project object that could have a large number (~1000) of Line objects. I'm deciding whether it is better to expose an ObservableCollection<LineViewModel> from my ProjectViewModel and manually create the Line viewmodels there, OR to set the ListBox as its own region and activate views that way. I'd still want my LineViewModel to have Prism's shared services (IEventAggregator, etc.) injected, but I don't know how to do that when I manually create the LineViewModel. Any suggestions or thoughts?

    Read the article

  • How to efficiently deal with a large amount of HTML5 canvas pixel data over websockets

    - by user730569
    Using imageData = context.getImageData(0, 0, width, height); JSON.stringify(imageData.data); I grab the pixel data, convert it to a string, and then send it over the wire via WebSockets. However, this string can be pretty large, depending on the size of the canvas object. I tried using the compression technique found here (JavaScript implementation of Gzip), but socket.io throws the error "WebSocket message contains invalid character(s)". Is there an effective way to compress this data so that it can be sent over WebSockets?
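
    A back-of-the-envelope illustration (Python standard library only, synthetic data) of why sending the pixels as a compressed binary frame beats JSON.stringify on the pixel array; the repeating test pattern below compresses far better than real image data, but the per-byte JSON overhead it demonstrates is the same. On the browser side this corresponds to sending imageData.data.buffer as a binary WebSocket frame, assuming the transport in use accepts binary messages.

      import json
      import zlib

      width, height = 640, 480
      rgba = bytes(range(256)) * (width * height * 4 // 256)   # fake RGBA buffer

      as_json = json.dumps(list(rgba))   # what JSON.stringify(imageData.data) amounts to
      raw = rgba                         # raw binary frame (ArrayBuffer equivalent)
      deflated = zlib.compress(rgba)     # binary frame plus DEFLATE

      print(len(as_json), len(raw), len(deflated))   # JSON is several times the raw size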

    Read the article

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources, binding and exporting data for reporting. I started by using a DataSet, filling it with 100,000 rows and then attempting to open a Crystal Report with the retrieved data. I noticed that the DataSet filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application would fail with an OutOfMemoryException. Has anyone experienced this before, or have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, and 100,000 rows are not inconceivable. The application and the benchmark code are written in C# using Oracle and SQL Server databases. I still have some data sources to test, but would like to know how to get around this just in case I don't find a better solution.

    Read the article

  • How to retrieve large data from an Oracle database using VBScript

    - by allenzzzxd
    Hi guys, I'm now working with VBScript to do some testing. Actually, I want to retrieve a large amount of data from an Oracle database, so I wrote the code like this: sql = "Select * from CORE_DB where MC = '" & mstr & "' " Set myrs = db_execute_query(curConnection, sql) Then I count the rows in myrs; there are 248 rows. Then I do a For loop to retrieve some fields of each row: For k = 0 To db_get_rows_count(myrs) But I found that the content of row k, for every k > 133, was always equal to that of row 133, which causes an error. As I see it, there may be a size limit on myrs? Could anyone enlighten me about this? Thanks a lot in advance

    Read the article

  • Large number of tables and Hibernate memory consumption

    - by Vedran
    I'm working on a large ERP project whose database model has about 2100 tables. With "only" 500 tables mapped with Hibernate, the application deployed on the web server takes about 3 GB of working memory. Is there any way to reduce Hibernate's metamodel memory footprint when using that many tables in one persistence unit? Or should I just give up on ORMs and go with plain old JDBC (or even jOOQ)? Right now I'm using Hibernate 4.1.8, Spring 3.1.3, JBoss AS 7.1 and working with an MSSQL database. Edit: JavaMelody memory histogram output - with 2000 generated test tables that are a bit smaller in scope than the original db model (hence 'only' 1.3 GB of spent memory)

    Read the article

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file: Data[] data = { /* my data set */ }; The data set is about 90kb, or roughly 13k elements. I was wondering if there's any downside to doing this, as opposed to loading it in from an external file? I'm not entirely sure how C# works internally, so I just wanted to be aware of any performance issues I might encounter with this method.

    Read the article

  • Unable to return large result set ORA-22814

    - by rvenugopal
    Hello all, I am encountering an issue when I try to load a large result set using a range query in Oracle 10g. When I try a smaller range (1 to 100), it works, but when I try a larger range (1 to 1000), I get the error "ORA-22814: attribute or element value is larger than specified in type". I have a basic UDT (PostComments_Type) and I have tried using both a VARRAY and a table type of PostComments_Type, but that hasn't made a difference. Your help is appreciated. Thanks, Venu

      PROCEDURE RangeLoad (
          floorId IN NUMBER,
          ceilingId IN NUMBER,
          o_PostComments_LARGE_COLL_TYPE OUT PostComments_LARGE_COLL_TYPE -- tried as both a VARRAY and a table type of PostComments_Type
      ) IS
      BEGIN
          SELECT PostComments_TYPE (
              PostComments_ID,
              ...
          )
          BULK COLLECT INTO o_PostComments_LARGE_COLL_TYPE -- this is for the VARRAY/table type, so a bulk operation
          FROM PostComments
          WHERE PostComments_ID BETWEEN floorId AND ceilingId;
      END RangeLoad;

    Read the article

  • Using pow() for large numbers

    - by g4ur4v
    I am trying to solve a problem, part of which requires me to calculate (2^n) % 1000000007, where n <= 10^9. But my following code gives the output "0" even for inputs like n = 99. Is there any way other than a loop which multiplies the output by 2 and takes the modulo every time (this is not what I am looking for, as it will be very slow for large numbers)?

      #include<stdio.h>
      #include<math.h>
      #include<iostream>
      using namespace std;
      int main()
      {
          unsigned long long gaps, total;
          while(1)
          {
              cin >> gaps;
              total = (unsigned long long)powf(2, gaps) % 1000000007;
              cout << total << endl;
          }
      }
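
    powf works in single-precision floating point, so for large exponents the value either loses precision (a float only carries about 24 bits of mantissa) or overflows the cast back to a 64-bit integer entirely; either way the modulo is applied to the wrong number. The standard fix is square-and-multiply (binary exponentiation), which needs only about log2(n) multiplications with a reduction mod 1000000007 after each one. A sketch in Python that ports directly to C++ with unsigned long long, since (10^9 + 6)^2 still fits in 64 bits; in Python itself, the built-in pow(2, n, 1000000007) already does the same thing.

      MOD = 1000000007

      def pow_mod(base: int, exp: int, mod: int = MOD) -> int:
          result = 1
          base %= mod
          while exp > 0:
              if exp & 1:                  # current low bit of the exponent is set
                  result = result * base % mod
              base = base * base % mod     # square the base for the next bit
              exp >>= 1
          return result

      print(pow_mod(2, 99))        # correct, unlike powf(2, 99) cast to an integer
      print(pow_mod(2, 10**9))     # still instant for n up to 1e9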

    Read the article

  • MySQL master-slave replication on a large database table (how to sync initial data)

    - by Brian Lovett
    We have a production server and a dev server. We have found that backups are nearly impossible on the production server because of the query volume we experience. So we're looking at setting up replication with our dev server as the slave. This is ideal because we can afford to lock the tables on that server, and additionally it will be nice to have up-to-date data for the developers. Now, the issues: the production server can't really be taken down or locked at this point, at least not easily. We have a high query volume and fairly large (30+ GB) InnoDB tables. Both servers run all InnoDB and both are on MySQL 5.1. What can we do to sync the data initially to get replication started? I've tried a few options, but so far none have worked.

    Read the article

  • Are there version control systems that allow you to permanently delete files?

    - by Andrea Francia
    I need to keep some large files (a few gigabytes each) under version control. I don't need, and can't, keep every version of these files; I want to be able to permanently remove old versions of the large files from my VCS at some point. What version control system could I use? EDIT: The files that I want to keep under version control are big .zip files or ISO images. These files may contain executable software or data (seismic data, SAR images, GNSS data) and they are provided by my company's software suppliers.

    Read the article

  • Transforming large XML files

    - by Chad
    I was using this extension method to transform very large XML files with an XSLT. Unfortunately, I get an OutOfMemoryException on the source.ToString() line. I realize there must be a better way, I'm just not sure what that would be?

      public static XElement Transform(this XElement source, string xslPath, XsltArgumentList arguments)
      {
          var doc = new XmlDocument();
          doc.LoadXml(source.ToString());

          var xsl = new XslCompiledTransform();
          xsl.Load(xslPath);

          using (var swDocument = new StringWriter(System.Globalization.CultureInfo.InvariantCulture))
          {
              using (var xtw = new XmlTextWriter(swDocument))
              {
                  xsl.Transform((doc.CreateNavigator()), arguments, xtw);
                  xtw.Flush();
                  return XElement.Parse(swDocument.ToString());
              }
          }
      }

    Thoughts? Solutions? Etc.

    Read the article

  • How to restore a very large .bak file (180 GB) in SQL Server 2008

    - by Umutos
    Hello! I have a very large .bak file (180 GB) which came from Microsoft SQL Server 2008, and I have to restore it. I first installed Microsoft SQL Server 2008 Express and tried to restore it in SQL Server Management Studio Express, but it didn't work because there is a size limit. Does anybody know a method by which I can restore the file? It's the first time I've worked with Microsoft SQL Server and I have no clue what to do. It's really urgent and I would be grateful for any help! Thanks a lot! Umutos

    Read the article

  • Use of bit-torrent for large file download as an alternative to FTP

    - by questzen
    The company I work for procures large volumes of data by subscribing to FTP locations. I was wondering if it would be possible to download the same data using a tracker; the major challenge is authentication of the users, IMO. Most FTP servers we subscribe to restrict the number of FTP connection attempts. Does anyone here have any experience with this? Any advice is welcome. Edit: To clarify, we subscribe to third-party vendors and access their FTP location using credentials provided by them. The service is not exclusive to us; they sell their data to several others. If we could be part of the swarm, the download rates would be pretty high without any added penalty. The question is about the possibility of achieving this, so that we can put forth a proposal along those lines. The vendors obviously wouldn't share data with non-subscribers, so that is a constraint.

    Read the article
