Search Results

Search found 77950 results on 3118 pages for 'large file upload'.

Page 165/3118 | < Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >

  • How to execute a "name.desktop" file? [duplicate]

    - by Pubudug
    This question already has an answer here: Running a .desktop file in the terminal (10 answers)

        #!/usr/bin/env xdg-open
        [Desktop Entry]
        Version=1.0
        Type=Link
        Name=ShareFolder
        Icon=/usr/share/icons/DPL/NetworkShare.png
        Name[en_US]=ShareFolder
        URL=smb://servername/sharefolder

    This is my .desktop file, which has a URL. How do I execute this desktop shortcut in the terminal? If I double-click it, it works perfectly, but I need to launch it from a terminal. I tried the answers from "Running a .desktop file in the terminal"; they didn't work for me either, although they do work for an "application" shortcut. Here I'm trying to execute a "link" .desktop file, i.e. one with Type=Link and URL=smb://servername/sharefolder.
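
    A minimal sketch of one workaround from a terminal (an assumption, not from the post: it bypasses the launcher entirely and opens the URL directly, and it assumes GNU grep and xdg-open are available):

        # Hypothetical approach: pull the URL= value out of the launcher and open it
        URL=$(grep -oP '^URL=\K.*' ShareFolder.desktop)
        xdg-open "$URL"    # or: gio open "$URL"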

    Read the article

  • VCS for single user using file sync service

    - by StackUnder
    I'm trying to set up version control for my one-man project. My project files are kept in sync by Live Mesh (though I could just as well be using Dropbox) between my laptop, my home PC and my office PC. I'm now using NetBeans with local file history. Sometimes it helps to revert to a previous state of one file. But imagine a situation where multiple files have problems: correct me if I'm wrong, but I would have to go to every file and revert it to a previous "safe" state. I don't like this approach, so I'm considering version control, choosing between SVN and Git. I have some previous experience with SVN (TortoiseSVN) and I know that I can create a file:// repo. So what I want to do is set up a VCS inside my synced folder just to have the ability to "revert" to a previous version if something goes wrong. Since everything is synced to all computers, I would never need to run an update. The file tree organization would be the following: C:...\SyncedFolder\MyProject\ Inside the MyProject folder are all the project files, plus a directory that holds the SVN or Git info for my project (the repo/master). Which VCS is best for this situation: SVN or Git? Does SVN need to store all files of the HEAD revision, thus "duplicating" my whole project inside my synced folder? Does Git eliminate this problem? Is this the best approach?
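
    For what it's worth, the Git variant of this layout is only a few commands (a sketch; the repository lives in a .git directory inside the synced folder, so it travels with the project):

        cd SyncedFolder/MyProject      # the synced project root
        git init                       # creates the .git directory alongside the project files
        git add -A
        git commit -m "baseline"
        # later, to roll one or more files back to the last committed state:
        git checkout -- path/to/file1 path/to/file2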

    Read the article

  • Uploadify with ruby on rails 'bad content body' 500 Internal Server Error

    - by Mr_Nizzle
    I'm getting this error in my development log while Uploadify is uploading the file, and in the view I get an 'IO Error' beside the filename.

        /!\ FAILSAFE /!\  Thu Mar 18 11:54:53 -0500 2010
        Status: 500 Internal Server Error
        bad content body
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:351:in `parse_multipart'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:323:in `loop'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:323:in `parse_multipart'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/request.rb:133:in `POST'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:15:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/params_parser.rb:15:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/session/cookie_store.rb:93:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/reloader.rb:29:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/failsafe.rb:26:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/dispatcher.rb:106:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/fastcgi.rb:58:in `serve'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:103:in `process_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:153:in `with_signal_handler'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:101:in `process_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:78:in `process_each_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:77:in `each'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:77:in `process_each_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:76:in `catch'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:76:in `process_each_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:51:in `process!'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:23:in `process!'
        dispatch.fcgi:24

    Any idea on this?

    Read the article

  • webservice CopyIntoItems is not working to upload file to sharepoint

    - by Joeri
    The following piece of C# always fails, printing 1, Unknown, and Object reference not set to an instance of an object (i.e. one CopyResult whose ErrorCode is Unknown). Does anybody have an idea what I am missing?

        try
        {
            // Copy web service settings
            String strUserName = "abc";
            String strPassword = "abc";
            String strDomain = "SVR03";
            String FileName = "Filename.xls";
            WebReference.Copy copyService = new WebReference.Copy();
            copyService.Url = "http://192.168.11.253/_vti_bin/copy.asmx";
            copyService.Credentials = new NetworkCredential(strUserName, strPassword, strDomain);

            // Filestream of attachment
            FileStream MyFile = new FileStream(@"C:\temp\28200.xls", FileMode.Open, FileAccess.Read);

            // Read the attachment into a variable
            byte[] Contents = new byte[MyFile.Length];
            MyFile.Read(Contents, 0, (int)MyFile.Length);
            MyFile.Close();

            // Change file name; if not exist then create new one
            String[] destinationUrl = { "http://192.168.11.253/Shared Documents/28200.xls" };

            // Setup some SharePoint metadata fields
            WebReference.FieldInformation fieldInfo = new WebReference.FieldInformation();
            WebReference.FieldInformation[] ListFields = { fieldInfo };

            // Copy the document from local to SharePoint
            WebReference.CopyResult[] result;
            uint NewListId = copyService.CopyIntoItems(FileName, destinationUrl, ListFields, Contents, out result);

            if (result.Length < 1)
                Console.WriteLine("Unable to create a document library item");
            else
            {
                Console.WriteLine(result.Length);
                Console.WriteLine(result[0].ErrorCode);
                Console.WriteLine(result[0].ErrorMessage);
                Console.WriteLine(result[0].DestinationUrl);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception: {0}", ex.Message);
        }
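
    Two things worth checking, as assumptions rather than a verified fix: the first argument to CopyIntoItems is a source URL (when copying from a byte array a placeholder such as "http://null" is often passed, rather than a bare file name), and the FieldInformation entries are normally populated instead of left empty. A hedged sketch:

        // Sketch only: the field name/value here are illustrative
        WebReference.FieldInformation titleInfo = new WebReference.FieldInformation();
        titleInfo.DisplayName = "Title";
        titleInfo.InternalName = "Title";
        titleInfo.Type = WebReference.FieldType.Text;
        titleInfo.Value = "28200";
        WebReference.FieldInformation[] listFields = { titleInfo };

        WebReference.CopyResult[] result;
        copyService.CopyIntoItems("http://null", destinationUrl, listFields, Contents, out result);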

    Read the article

  • The remote server returned an unexpected response: (413) Request Entity Too Large

    - by user1583591
    If anyone can help me figure out why I am getting the following error when making a call to my WCF service, I would be eternally grateful:

        The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

    I have tried modifying the config file on both the service and the client, and made sure the service name includes the namespace, but I can't seem to make any progress. Here are my service config settings:

        <services>
          <service name="CCC.CA-CP &amp; Sightlines Campus Carbon Calculator">
            <endpoint address="" binding="basicHttpBinding" bindingConfiguration="Binding2"
                      contract="CCC.ICCCService" behaviorConfiguration="WebBehavior2" />
          </service>
        </services>
        <bindings>
          <basicHttpBinding>
            <binding name="Binding2" sendTimeout="00:01:00" allowCookies="false"
                     bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                     maxBufferSize="2147483647" maxBufferPoolSize="52428800"
                     maxReceivedMessageSize="2147483647" messageEncoding="Text"
                     textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
              <readerQuotas maxDepth="32" maxStringContentLength="2147483647"
                            maxArrayLength="16384" maxBytesPerRead="20000"
                            maxNameTableCharCount="16384" />
            </binding>
          </basicHttpBinding>
        </bindings>
        ..
        <dataContractSerializer maxItemsInObjectGraph="12097151" />
        ...
        <requestLimits maxAllowedContentLength="157286400" />
        ...
        <httpRuntime useFullyQualifiedRedirectUrl="true" maxRequestLength="2147483647"...

    I also set the client config with the same binding values. Here is the service contract:

        namespace CCC
        {
            [ServiceContract(Name = "CA-CP & Sightlines Campus Carbon Calculator", Namespace = "http://www.sightlines.com/CCC/01")]
            public interface ICCCService
            {
                ....
            }

    Thanks in advance for any help given!
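
    One frequent gotcha that fits these symptoms (an assumption about this case, not a confirmed diagnosis): the name attribute on <service> must be the fully qualified type name of the service implementation class, not the ServiceContract's Name. If they don't match, WCF can ignore the configured endpoint and fall back to defaults, including the default 65536 message size quota. A hedged sketch, with a guessed class name:

        <!-- Sketch: "CCC.CCCService" stands in for the actual implementation class -->
        <service name="CCC.CCCService">
          <endpoint address="" binding="basicHttpBinding"
                    bindingConfiguration="Binding2"
                    contract="CCC.ICCCService"
                    behaviorConfiguration="WebBehavior2" />
        </service>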

    Read the article

  • CGContextDrawPDFPage taking up large amounts of memory

    - by Ed Marty
    I have a PDF file that I want to draw in outline form. I want to draw the first several pages of the document each in their own UIImage to use on a button, so that when one is clicked, the main display navigates to the clicked page. However, CGContextDrawPDFPage seems to use copious amounts of memory when attempting to draw the page. Even though the image is only supposed to be around 100px tall, the application crashes while drawing one page in particular, which, according to Instruments, allocates about 13 MB of memory just for the one page. Here's the code for drawing:

        // Note: this is always called in a background thread, but the autorelease pool is set up elsewhere
        + (void) drawPage:(CGPDFPageRef)m_page inRect:(CGRect)rect inContext:(CGContextRef)g
        {
            CGPDFBox box = kCGPDFMediaBox;
            CGAffineTransform t = CGPDFPageGetDrawingTransform(m_page, box, rect, 0, YES);
            CGRect pageRect = CGPDFPageGetBoxRect(m_page, box);

            // Start the drawing
            CGContextSaveGState(g);

            // Clip to our bounding box
            CGContextClipToRect(g, pageRect);

            // Now we have to flip the origin to top-left instead of bottom-left
            // First: flip the y-axis
            CGContextScaleCTM(g, 1, -1);
            // Second: move the origin
            CGContextTranslateCTM(g, 0, -rect.size.height);

            // Now apply the transform to draw the page within the rect
            CGContextConcatCTM(g, t);

            // Finally, draw the page
            // The important bit: commenting out the following line "fixes" the crashing issue.
            CGContextDrawPDFPage(g, m_page);

            CGContextRestoreGState(g);
        }

    Is there a better way to draw this image that doesn't take up huge amounts of memory?

    Read the article

  • Large Django application layout

    - by Rob Golding
    I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay out the project/development environment. My initial idea is to develop the system as a Django "app" which contains sub-applications to separate out the different parts of the system. The reason I intend to make these "sub" applications is that they would not have any use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example), so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications and a urls.py routing the URLs to it. I have started to write some initial code, though, and I've come up against a problem. Some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application, as it doesn't relate to the portal's functionality. It also, however, can't go in the project repository, as I would then be duplicating the code over each location's repository. If I then discovered a bug in this code, for example, I would have to manually replicate the fix over all of the locations' project files. My idea for a fix is to make all the project repos forks of a "master" location project, so that I can pull any changes from that master. I think this is messy, though, and it means that I have one more repository to look after. I'm looking for a better way to structure this project. Can anyone recommend a solution, or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.
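
    One common resolution (a sketch with invented names, not the asker's layout): pull the homeless auth/profile code into its own small reusable app with its own repository, and declare it alongside the portal in each location's settings, so a bug fix lands in exactly one place:

        # settings.py in each location's project; app names here are illustrative
        INSTALLED_APPS = [
            'django.contrib.auth',
            'portal',            # the main portal app and its sub-applications
            'portal_accounts',   # shared authentication/profile code, maintained in its own repo
        ]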

    Read the article

  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds:

        cx = sqlite.connect("database.sql", timeout=30.0)

    I think I can see some evidence of the timeouts, in that I get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped occasionally in the middle of my curses-formatted output screen, but never a delay that gets remotely near the 30-second timeout; still, one or the other keeps crashing, again and again, from this. I'm running RHEL 5.4 on a 64-bit 4-CPU HS21 IBM blade, and have heard some mention of issues with multi-threading, though I'm not sure if this is relevant. The packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of RedHat's official provisions is not a great option for me. Possible, but not desirable due to the environment in general. I previously had autocommit=1 on in both scripts, but have since disabled it in both, and am now cx.commit()ing in the inserting script and not committing in the select script. Ultimately, as I only ever have one script actually making any modifications, I don't really see why this locking should ever happen. I have noticed that this gets significantly worse over time as the database grows larger. It was recently at 13 MB with 3 equal-sized tables, which was about 1 day's worth of data. Creating a new file improved this significantly, which seems understandable, but ultimately the timeout just doesn't seem to be being obeyed. Any pointers very much appreciated. Thanks, Chris
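
    For what it's worth, a minimal retry pattern around writes, as a sketch (Python 2 syntax to match the RHEL 5-era stack; it retries on "locked" errors instead of relying only on the busy timeout, and commits immediately so the write lock is held as briefly as possible):

        import time

        def execute_with_retry(cx, sql, params=(), attempts=5):
            for attempt in range(attempts):
                try:
                    cur = cx.cursor()
                    cur.execute(sql, params)
                    cx.commit()   # release the write lock right away
                    return cur
                except Exception, e:
                    if 'locked' not in str(e) or attempt == attempts - 1:
                        raise
                    time.sleep(0.5 * (attempt + 1))   # back off, then retry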

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with the key "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send one HTTP request per part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 parts and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those. Is this the right way to do this? What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.) When sending the HTTP requests, should I specify the entire part's size in the range, or ask for smaller pieces, say 1024 KB per request? When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks. Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-wise? What if I have several downloads running simultaneously? If I'm not using a memory-mapped file, should I open a separate handle per allowed connection, or simply seek within one handle when I need to write? (If I used a memory-mapped file this would be really easy, since I could simply have several pointers.) Note: I'll probably be using Qt, but this is a general question, so I left code out of it.
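
    For reference, the raw exchange described above looks like this (a sketch: a server that honors range requests answers 206 Partial Content, while one that doesn't answers 200 with the whole file; servers that support resuming typically also advertise it with an Accept-Ranges: bytes header):

        GET /file.bin HTTP/1.1
        Host: example.com
        Range: bytes=1048576-2097151

        HTTP/1.1 206 Partial Content
        Content-Range: bytes 1048576-2097151/4194304
        Content-Length: 1048576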

    Read the article

  • working with large sprite sheets on iphone

    - by lukya
    Hi all, I am trying to use sprite sheet animation in my application. The first POC with a small sprite sheet worked fine, but when I changed the sprite sheet to a bigger one, I get a "check_safe_call: could not restore current frame" warning and the application quits. A quick search revealed that this means my app is taking too much memory or the image is too huge in dimension. My image is 4.9 MB and its dimensions are 6720 x 10080 (oops!!). I read that the iPhone allows a maximum 3 MB image with dimensions up to 1024 x 1024, and also that sprite sheet dimensions should be a power of two. So please let me know how I can use a sprite sheet this big. One approach could be to cut the sprite sheet into many smaller sprite sheets and use them one at a time. Please suggest if you know any other/better approach to accommodate bigger sprite sheets, and whether the problem with my sprite sheet is its size (4.9 MB) or its dimensions (6720 x 10080). (Just FYI, I am not trying to play a movie, so using an MP4 file instead is not an option for me. I need to animate the sprite sheet based on accelerometer input, and I have been able to achieve that in my POC with the smaller sprite sheet.) Thanks, Swapnil
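
    For scale, a quick back-of-the-envelope calculation (not from the original post): what matters is the decoded size in memory, not the 4.9 MB on disk, because the image must be decompressed to raw pixels before it can be drawn:

        6720 * 10080 pixels * 4 bytes/pixel = 270,950,400 bytes ≈ 270 MB of RAM

    So the dimensions are the problem, and splitting into smaller sheets loaded one at a time, as suggested above, attacks exactly that.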

    Read the article

  • importing a large txt file in MySQL?

    - by Taz
    Hi, I am loading text data into MySQL using the following command:

        mysql> LOAD DATA LOCAL INFILE 'C:\\Documents and Settings\\Scan\\My Documents\\Downloads\\instance_types_en.nt\\Copy of instance_types_en.txt'
               INTO TABLE dbpediaentities.resources
               FIELDS TERMINATED BY ' '
               LINES TERMINATED BY '\r\n';

    The data looks like this (there is actually a newline after each '.'):

        <a> <b> <c> .
        <a> <b> <c> .
        <a> <b> <c> .
        <a> <b> <c> .
        <a> <b> <c> .
        <a> <b> <c> .

    The table has an auto-increment ID field and then text fields for all three values. The file size is about 750 MB. The problems are: 1. appears to be in the first text field, 2. only 2 MB of data is imported.

    Read the article

  • Force Download Files Broken Headers wrong?

    - by Sinan
    if ($_POST['mode'] == "save") {
        $root = $_SERVER['DOCUMENT_ROOT'];
        $path = "/mcboeking/";
        $path = $root . $path;
        $file_path = $_POST['path'];
        $file = $path . $file_path;
        if (!file_exists($file)) {
            die('file not found');
        } else {
            header("Cache-Control: public");
            header("Content-Description: File Transfer");
            header('Content-Type: application/force-download');
            header("Content-Disposition: attachment; filename=\"" . basename($file) . "\";");
            header("Content-Length: " . filesize($file));
            readfile($file);
        }
    }

    As soon as I download the file and open it, I get an error message. When I try to open a .doc I get the message: "file structure is invalid". And when I try to open a jpg: "This file can not be opened. It may be corrupt or a file format that Preview does not recognize." But when I download PDF files, they open without any problem. Can someone help me? P.S. I tried different headers, including: header('Content-Type: application/octet-stream');
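
    One pattern that fits "PDF opens, JPEG/DOC corrupt" (an assumption about this case, not a confirmed diagnosis): stray bytes, such as whitespace before an opening <?php tag or a UTF-8 BOM, are being prepended to the download. PDF viewers scan forward for the %PDF marker and tolerate leading junk; JPEG and DOC parsers do not. A hedged guard before streaming:

        // Sketch: discard anything already buffered so the file bytes start the response
        while (ob_get_level() > 0) {
            ob_end_clean();
        }
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . basename($file) . '"');
        header('Content-Length: ' . filesize($file));
        readfile($file);
        exit;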

    Read the article

  • how to split a very large database on sql server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data for 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million. What I want to do is somehow split the database so that I don't have a single database file that is 90 GB. What I want is to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it every time I back up the entire database; however, I would like the 2009 data to be included whenever I run a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure whether it would solve all my problems. I wasn't able to find anything that would tell me whether or not it migrates pre-existing data, or whether it only works for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken
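
    For orientation, the filegroup-plus-partitioning route looks roughly like this in T-SQL (a sketch with invented names; partitioning requires Enterprise Edition, and moving a pre-existing table onto a partition scheme does migrate its data, by rebuilding the clustered index on the scheme):

        -- Sketch: names and paths are illustrative
        ALTER DATABASE StockData ADD FILEGROUP FG2009;
        ALTER DATABASE StockData ADD FILE
            (NAME = 'StockData2009', FILENAME = 'D:\Data\StockData2009.ndf')
            TO FILEGROUP FG2009;

        CREATE PARTITION FUNCTION pfByYear (datetime)
            AS RANGE RIGHT FOR VALUES ('2010-01-01');
        CREATE PARTITION SCHEME psByYear
            AS PARTITION pfByYear TO (FG2009, [PRIMARY]);

        -- Rebuilding an existing table's clustered index on the scheme moves the 2009 rows
        CREATE CLUSTERED INDEX cix_Trades ON dbo.Trades (TradeDate)
            WITH (DROP_EXISTING = ON) ON psByYear (TradeDate);

        -- The 2009 filegroup can then be backed up once, on its own
        BACKUP DATABASE StockData FILEGROUP = 'FG2009' TO DISK = 'E:\Backups\FG2009.bak';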

    Read the article

  • file upload in JSF using myfaces component

    - by prt
    Hi all, I am creating a JSF application where file upload functionality is required. I have added all the required jar files to my /WEB-INF/lib folder:

        jsf-api.jar
        jsf-impl.jar
        jstl.jar
        standard.jar
        myfaces-extensions.jar
        commons-collections.jar
        commons-digester.jar
        commons-beanutils.jar
        commons-logging.jar
        commons-fileupload-1.0.jar

    But still, when trying to deploy the application on Apache Tomcat 6.0.29, I get the following errors:

        org.apache.catalina.core.StandardContext addApplicationListener
        INFO: The listener "com.sun.faces.config.ConfigureListener" is already configured for this context. The duplicate definition has been ignored.
        org.apache.catalina.core.StandardContext start
        SEVERE: Error listenerStart
        org.apache.catalina.core.StandardContext start
        SEVERE: Context [/jsfApplication] startup failed due to previous errors
        org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        The web application [/jsfApplication] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/jsfApplication] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak.
        org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/jsfApplication] appears to have started a thread named [MySQL Statement Cancellation Timer] but has failed to stop it. This is very likely to create a memory leak.
        log4j:ERROR LogMananger.repositorySelector was null likely due to error in class reloading, using NOPLoggerRepository.

    I am also using the Hibernate and Spring frameworks in this application. Please help. Thanks.

    Read the article

  • PHP: imagepng is creating inordinately large files

    - by Rafael
    I'm using a simple thumbnailing script I wrote and it's pretty standard:

        $imgbuffer = imagecreatetruecolor($thumbwidth, $thumbheight);
        switch ($type) {
            case 1:  $image = imagecreatefromgif($img); break;
            case 2:  $image = imagecreatefromjpeg($img); break;
            case 3:  $image = imagecreatefrompng($img); break;
            case 6:  $image = imagecreatefrombmp($img); break;
            case 15: $image = imagecreatefromwbmp($img); break;
            default: return log_error("Tried to create thumbnail from $img: not a valid image");
        }
        imagecopyresampled($imgbuffer, $image, 0, 0, 0, 0, $thumbwidth, $thumbheight, $width, $height);
        $output = imagepng($imgbuffer, "$album/thumbs/$imgname.png", 9);

    9 is the lowest quality setting, yet from a 400 x 600 JPEG image (at 56 kB) I'm getting a thumbnail 27 kB in size (140 x 140). Using imagejpeg (quality of 80) instead of imagepng it's about 4 kB. How can this be, especially at the lowest quality setting for imagepng? I tried using imagecopy instead of imagecopyresampled, and imagecreate instead of the true color version; unfortunately the images come out mangled somehow. Is there any way to get PNG thumbnails of a reasonably small file size (about 4 kB at 140 x 140)? Or do I have to use JPEG?
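
    Worth noting as background (general PNG behaviour, not from the post): imagepng's third argument is the zlib compression level (0-9), not a lossy quality dial. PNG is lossless, so a photographic 140 x 140 thumbnail stays comparatively large at any level, while JPEG discards detail and suits photos far better. A sketch of the usual compromise:

        // Sketch: JPEG for photographic thumbnails; keep PNG only where transparency matters
        $output = imagejpeg($imgbuffer, "$album/thumbs/$imgname.jpg", 80);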

    Read the article

  • .net Drawing.Graphics.FromImage() returns blank black image

    - by joox
    I'm trying to rescale an uploaded JPEG in ASP.NET, so I go:

        Image original = Image.FromStream(myPostedFile.InputStream);
        int w = original.Width, h = original.Height;
        using (Graphics g = Graphics.FromImage(original))
        {
            g.ScaleTransform(0.5f, 0.5f);
            ... // e.g.
            using (Bitmap done = new Bitmap(w, h, g))
            {
                done.Save(Server.MapPath(saveas), ImageFormat.Jpeg); // saves blank black, though with correct width and height
            }
        }

    This saves a virgin black JPEG whatever file I give it. Though if I take the input image stream immediately into the done bitmap, it does recompress and save it fine, like:

        Image original = Image.FromStream(myPostedFile.InputStream);
        using (Bitmap done = new Bitmap(original))
        {
            done.Save(Server.MapPath(saveas), ImageFormat.Jpeg);
        }

    Do I have to make some magic with g?

    Update: I tried:

        Image original = Image.FromStream(fstream);
        int w = original.Width, h = original.Height;
        using (Bitmap b = new Bitmap(original)) // also tried new Bitmap(w, h)
        using (Graphics g = Graphics.FromImage(b))
        {
            g.DrawImage(original, 0, 0, w, h); // also tried g.DrawImage(b, 0, 0, w, h)
            using (Bitmap done = new Bitmap(w, h, g))
            {
                done.Save(Server.MapPath(saveas), ImageFormat.Jpeg);
            }
        }

    Same story - pure black of correct dimensions.
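
    For comparison, the usual rescale pattern (a sketch, not the asker's code): Bitmap(w, h, g) only copies the resolution from g, and its pixels start out blank, which is why every variant above saves black. Drawing the source into the destination bitmap is what actually performs the rescale:

        using (Image original = Image.FromStream(myPostedFile.InputStream))
        using (Bitmap done = new Bitmap(original.Width / 2, original.Height / 2))
        {
            using (Graphics g = Graphics.FromImage(done))  // draw ONTO the target
            {
                g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                g.DrawImage(original, 0, 0, done.Width, done.Height);
            }
            done.Save(Server.MapPath(saveas), ImageFormat.Jpeg);
        }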

    Read the article

  • Read/Write Files from the Content Provider

    - by drum
    I want to be able to create a file from the Content Provider; however, I get the following error:

        java.io.FileNotFoundException: /0: open file failed: erofs (read-only file system)

    What I am trying to do is create a file whenever an application calls the insert method of my provider. This is the excerpt of the code that does the file creation:

        FileWriter fstream = new FileWriter(valueKey);
        BufferedWriter out = new BufferedWriter(fstream);
        out.write(valueContent);
        out.close();

    Originally I wanted to use openFileOutput(), but the function appears to be undefined. Does anyone have a workaround for this problem?

    EDIT: I found out that I had to specify the directory as well. Here is a more complete snippet of the code:

        File file = new File("/data/data/Project.Package.Structure/files/" + valueKey);
        file.createNewFile();
        FileWriter fstream = new FileWriter(file);
        BufferedWriter out = new BufferedWriter(fstream);
        out.write(valueContent);
        out.close();

    I also enabled the permission:

        <uses-permission android:name="android.permission.WRITE_INTERNAL_STORAGE" />

    This time I got an error saying:

        java.io.IOException: open failed: ENOENT (No such file or directory)
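
    A hedged alternative (a sketch, assuming the writes happen inside the provider's insert()): openFileOutput is defined on Context, not on ContentProvider, which is why it appeared undefined; a provider can reach it through getContext(), and it resolves and creates the app-private files directory as needed:

        // Sketch: inside the ContentProvider, where getContext() is available
        Context context = getContext();
        BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                context.openFileOutput(valueKey, Context.MODE_PRIVATE)));
        out.write(valueContent);
        out.close();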

    Read the article

  • firefox does not load large size images

    - by Pradeep
    I am stuck with what looks like a bug in FF, wherein it's unable to load large images (I have an 8 MB image) from the server. Loading the image works fine in IE. I am still looking for ways to get rid of this problem. I changed the server (IIS) settings to allow bigger file sizes. Also, I tried the "load" event on the image using jQuery with all sorts of options listed at http://api.jquery.com/load-event/, but nothing has worked so far. If any of you have come across a similar problem and a way to resolve it, it would be nice to hear from you. Please note: high-resolution images are part of the requirement. Code:

        <style>
        img {
            background-color: #FFFFFF;
            background-image: url(http://eremurus.hyd:8080/QMS/plugin/imagepanner/loader.gif);
            background-repeat: no-repeat;
            background-position: center center;
        }
        </style>
        <script src="../plugin/jquery-ui-1.8.7.custom/js/jquery-1.4.4.min.js" type="text/javascript"></script>
        <script>
        jQuery(document).ready(function($){
            ///var _url = "http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg";
            // set up the node / element
            _im = $("#main");
            //_im.bind("load",function(){ $(this).fadeIn(); });
            // set the src attribute now, after insertion to the DOM
            //_im.attr('src',_url);
            $("#main").one("load", function(){
                alert('loaded');
            })
            .each(function(){
                if(this.complete){
                    $(this).trigger("load");
                }
            });
        });
        </script>
        </head>
        <body>
        <div id="target"><img id='main' src="http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"></img></div>
        </body>
        </html>

    Read the article

  • multiple-to-one relationship mysql, submissions

    - by Yulia
    Hello, I have the following problem. Basically I have a form with an option to submit up to 3 images. Right now, after each submission it creates 3 records in the albums table and 3 records for the images. I need it to be one record for the album and 3 for the images, plus to link the images to the album. I hope it all makes sense... Here is my structure:

        TABLE `albums` (
            `id` int(11) NOT NULL auto_increment,
            `title` varchar(50) NOT NULL,
            `fullname` varchar(40) NOT NULL,
            `email` varchar(100) NOT NULL,
            `created_at` datetime NOT NULL,
            `theme_id` int(11) NOT NULL,
            `description` int(11) NOT NULL,
            `vote_cache` int(11) NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=20 ;

        TABLE `images` (
            `id` int(11) NOT NULL auto_increment,
            `album_id` int(11) NOT NULL,
            `name` varchar(30) NOT NULL,

    and my code:

        function create_album($params) {
            db_connect();
            $query = sprintf("INSERT INTO albums set
                albums.title = '%s',
                albums.email = '%s',
                albums.discuss_url = '%s',
                albums.theme_id = '%s',
                albums.fullname = '%s',
                albums.description = '%s',
                created_at = NOW()",
                mysql_real_escape_string($params['title']),
                mysql_real_escape_string($params['email']),
                mysql_real_escape_string($params['theme_id']),
                mysql_real_escape_string($params['fullname']),
                mysql_real_escape_string($params['description'])
            );
            $result = mysql_query($query);
            if(!$result) {
                return false;
            }
            $album_id = mysql_insert_id();
            return $album_id;
        }

        if(!is_uploaded_file($_FILES['userfile']['tmp_name'][$i])) {
            $warning = 'No file uploaded';
        } elseif (is_valid_file_size($_FILES['userfile']['size'][$i])) {
            $_POST['album']['theme_id'] = $theme['id'];
            create_album($_POST['album']);
            mysql_query("INSERT INTO images(name) VALUES('$newName')");
            copy($_FILES['userfile']['tmp_name'][$i], './photos/'.$original_dir.'/'.$newName.'.jpg');
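
    The shape of the usual fix (a sketch against the names above): create the album once, before the per-file loop, keep the id that create_album already returns, and stamp it on every image row:

        // Sketch: one album per submission, each image linked via album_id
        $_POST['album']['theme_id'] = $theme['id'];
        $album_id = create_album($_POST['album']);
        for ($i = 0; $i < 3; $i++) {
            if (!is_uploaded_file($_FILES['userfile']['tmp_name'][$i])) {
                continue;
            }
            $name = mysql_real_escape_string($newName);
            mysql_query("INSERT INTO images (album_id, name) VALUES ($album_id, '$name')");
        }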

    Read the article

  • Saving image to server with php

    - by salmon
    Hey, I have the following script. Basically a Flash file sends it some data, and it creates an image and then opens a Save As dialog box for the user so that they can save the image to their system. Here the problem comes in: how would I go about also saving the image to my server?

        <?php
        $to = $_POST['to'];
        $from = $_POST['from'];
        $fname = $_POST['fname'];
        $send = $_POST['send'];
        $data = explode(",", $_POST['img']);
        $width = $_POST['width'];
        $height = $_POST['height'];

        $image = imagecreatetruecolor($width, $height);
        $background = imagecolorallocate($image, 0, 0, 0);

        // Copy pixels
        $i = 0;
        for ($x = 0; $x <= $width; $x++) {
            for ($y = 0; $y <= $height; $y++) {
                $int = hexdec($data[$i++]);
                $color = ImageColorAllocate($image, 0xFF & ($int >> 0x10), 0xFF & ($int >> 0x8), 0xFF & $int);
                imagesetpixel($image, $x, $y, $color);
            }
        }

        // Output image and clean
        #header("Content-Disposition: attachment; filename=test.jpg");
        header("Content-type: image/jpeg");
        ImageJPEG($image);
        imagedestroy($image);
        ?>

    Help would be greatly appreciated!
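
    ImageJPEG takes an optional second argument, a filename; when it is given, the image is written to that path instead of being sent to the browser, so keeping a server copy and still serving the download takes two calls, roughly like this (a sketch; the path is illustrative):

        header("Content-type: image/jpeg");
        imagejpeg($image, '/var/www/uploads/drawing.jpg');  // server copy (hypothetical path)
        imagejpeg($image);                                  // stream to the user as before
        imagedestroy($image);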

    Read the article

  • Dealing with large directories in a checkout

    - by Eric
    I am trying to come up with a version control process for a web app that I work on. Currently, my major stumbling blocks are two directories that are huge (both over 4 GB). Only a few people need to work on things within the huge directories; most people don't even need to see what's in them. Our directory structure looks something like:

        /
        --file.aspx
        --anotherFile.aspx
        --/coolThings
        ----coolThing.aspx
        --/bigFolder
        ----someHugeMovie.mov
        ----someHugeSound.mp3
        --/anotherBigFolder
        ----...

    I'm sure you get the picture. It's hard to justify a checkout that has to pull down 8 GB of data that's likely useless to a developer. I know, it's only once, but even once could be really frustrating for someone (and will make it harder for me to convince everyone to use source control). (Plus, clean checkouts will be painfully slow.) These folders do have to be available in the web application. What can I do? I've thought about separate repositories for the big folders. That way, you only download if you need it; but then how do I manage checking these out onto our development server? I've also thought about not trying to version control those folders and just updating them directly on the web server, but I am not enamored of this idea. Is there some magic way to simply exclude directories from a checkout that I haven't found? (Pretty sure there is not.) Of course, there's always the option to just give up, bite the bullet, and accept downloading 8 useless GB. What say you? Have you encountered this problem before? How did you solve it?
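
    On the "magic way to exclude directories" question: Subversion does have one, sparse directories (SVN 1.5+, with --set-depth exclude from 1.6). A sketch, using a hypothetical repository URL and the layout above:

        svn checkout --depth immediates http://svn.example.com/webapp/trunk webapp
        svn update --set-depth infinity webapp/coolThings
        svn update --set-depth exclude webapp/bigFolder          # the 4 GB folders stay out
        svn update --set-depth exclude webapp/anotherBigFolder

    Developers who do need the big folders run --set-depth infinity on them once; the web server's working copy simply checks out everything.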

    Read the article

  • SQL Server 05, which is optimal, LIKE %<term>% or CONTAINS() for searching large column

    - by Spud1
    I've got a function written by another developer which I am trying to modify for a slightly different use. It is used by an SP to check whether a certain phrase exists in a text document stored in the DB, and it returns 1 if the value is found or 0 if it's not. This is the query:

        SELECT @mres=1 from documents where id=@DocumentID and contains(text, @search_term)

    The document contains mostly XML, and the search_term is a GUID formatted as an nvarchar(40). This seems to run quite slowly to me (taking 5-6 seconds to execute this part of the process), but in the same script file there is also this version of the above, commented out:

        SELECT @mres=1 from documents where id=@DocumentID and text like '%' + @search_term + '%'

    This version runs MUCH quicker, taking 4 ms compared to 15 ms for the first example. So, my question is: why use the first over the second? I assume this developer (who is no longer working with me) had a good reason, but at the moment I am struggling to find it. Is it possibly something to do with the full-text indexing? (This is a dev DB I am working with, so the production version may have better indexing.) I am not that clued up on FTI, really, so I'm not quite sure at the moment. Thoughts/ideas?

    Read the article

< Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >