Search Results

Search found 7179 results on 288 pages for 'slow logon'.

Page 215/288 | < Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >

  • Toggle alternative

    - by Nutmeg
    In a contact form I am using both the jQuery validation plug-in 1.3 and the following (slide) script: $(document).ready(function() { // Expand Panel $("#open").click(function(){ $("div#contact").slideDown("slow"); }); // Collapse Panel $("#close").click(function(){ $("div#contact").slideUp("bounce"); }); // Switch buttons from "Log In | Register" to "Close Panel" on click $("#toggle a").click(function () { $("#toggle a").toggle(); }); }); The problem is that the validation plug-in 1.3 uses the following segment: addWrapper:function(toToggle){if(this.settings.wrapper)toToggle.push(toToggle.parents(this.settings.wrapper));return toToggle;}, which conflicts with the slide code. What's the smartest way to go about this? Using toggleClass instead? Thanks for your help!
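    A minimal sketch of the toggleClass route mentioned above, written here in TypeScript against the jQuery API; the .hidden CSS rule and the assumption that one of the two links starts hidden are illustration-only, not part of the original form:

        declare const $: any; // assumes jQuery (and the validation plug-in) are already loaded on the page

        $(document).ready(function () {
            // Expand / collapse the panel as before
            $("#open").click(function () {
                $("div#contact").slideDown("slow");
            });
            $("#close").click(function () {
                $("div#contact").slideUp("slow");
            });
            // Swap the two links by toggling a CSS class instead of calling .toggle(),
            // so nothing here collides with the plug-in's own toggle handling.
            // Assumes a ".hidden { display: none; }" rule and that one link starts with the class.
            $("#toggle a").click(function () {
                $("#toggle a").toggleClass("hidden");
            });
        });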

    Read the article

  • Fastest way to move records from an Oracle database into SQL Server

    - by user347748
    Ok this is the scenario... I have a table in Oracle that acts like a queue... A VB.net program reads the queue and calls a stored proc in SQL Server that processes and then inserts the message into another SQL Server table and then deletes the record from the oracle table. We use a DataReader to read the records from Oracle and then call the stored proc for each of the records. The program seems to be a little slow. The stored procedure itself isn't slow. The SP by itself when called in a loop can process about 2000 records in 20 seconds. But when called from the .Net program, the execution time is about 5 records per second. I have seen that most of the time consumed is in calling the stored procedure and waiting for it to return. Is there a better way of doing this? Here is a snippet of the actual code Function StartDataXfer() As Boolean Dim status As Boolean = False Try SqlConn.Open() OraConn.Open() c.ErrorLog(Now.ToString & "--Going to Get the messages from oracle", 1) If GetMsgsFromOracle() Then c.ErrorLog(Now.ToString & "--Got messages from oracle", 1) If ProcessMessages() Then c.ErrorLog(Now.ToString & "--Finished Processing all messages in the queue", 0) status = True Else c.ErrorLog(Now.ToString & "--Failed to Process all messages in the queue", 0) status = False End If Else status = True End If StartDataXfer = status Catch ex As Exception Finally SqlConn.Close() OraConn.Close() End Try End Function Private Function GetMsgsFromOracle() As Boolean Try OraDataAdapter = New OleDb.OleDbDataAdapter OraDataTable = New System.Data.DataTable OraSelCmd = New OleDb.OleDbCommand GetMsgsFromOracle = False With OraSelCmd .CommandType = CommandType.Text .Connection = OraConn .CommandText = GetMsgSql End With OraDataAdapter.SelectCommand = OraSelCmd OraDataAdapter.Fill(OraDataTable) If OraDataTable.Rows.Count > 0 Then GetMsgsFromOracle = True End If Catch ex As Exception GetMsgsFromOracle = False End Try End Function Private Function ProcessMessages() As Boolean Try ProcessMessages = False PrepareSQLInsert() PrepOraDel() i = 0 Dim Method As Integer Dim OraDataRow As DataRow c.ErrorLog(Now.ToString & "--Going to call message sending procedure", 2) For Each OraDataRow In OraDataTable.Rows With OraDataRow Method = GetMethod(.Item(0)) SQLInsCmd.Parameters("RelLifeTime").Value = c.RelLifetime SQLInsCmd.Parameters("Param1").Value = Nothing SQLInsCmd.Parameters("ID").Value = GenerateTransactionID() ' Nothing SQLInsCmd.Parameters("UID").Value = Nothing SQLInsCmd.Parameters("Param").Value = Nothing SQLInsCmd.Parameters("Credit").Value = 0 SQLInsCmd.ExecuteNonQuery() 'check the return value If SQLInsCmd.Parameters("ReturnValue").Value = 1 And SQLInsCmd.Parameters("OutPutParam").Value = 0 Then 'success 'delete the input record from the source table once it is logged c.ErrorLog(Now.ToString & "--Moved record successfully", 2) OraDataAdapter.DeleteCommand.Parameters("P(0)").Value = OraDataRow.Item(6) OraDataAdapter.DeleteCommand.ExecuteNonQuery() c.ErrorLog(Now.ToString & "--Deleted record successfully", 2) OraDataAdapter.Update(OraDataTable) c.ErrorLog(Now.ToString & "--Committed record successfully", 2) i = i + 1 Else 'failure c.ErrorLog(Now.ToString & "--Failed to exec: " & c.DestIns & "Status: " & SQLInsCmd.Parameters("OutPutParam").Value & " and TrackId: " & SQLInsCmd.Parameters("TrackID").Value.ToString, 0) End If If File.Exists("stop.txt") Then c.ErrorLog(Now.ToString & "--Stop File Found", 1) 'ProcessMessages = True 'Exit Function Exit For End If End With Next OraDataAdapter.Update(OraDataTable) 
c.ErrorLog(Now.ToString & "--Updated Oracle Table", 1) c.ErrorLog(Now.ToString & "--Moved " & i & " records from Oracle to SQL Table", 1) ProcessMessages = True Catch ex As Exception ProcessMessages = False c.ErrorLog(Now.ToString & "--MoveMsgsToSQL: " & ex.Message, 0) Finally OraDataTable.Clear() OraDataTable.Dispose() OraDataAdapter.Dispose() OraDelCmd.Dispose() OraDelCmd = Nothing OraSelCmd = Nothing OraDataTable = Nothing OraDataAdapter = Nothing End Try End Function Public Function GenerateTransactionID() As Int64 Dim SeqNo As Int64 Dim qry As String Dim SqlTransCmd As New OleDb.OleDbCommand qry = " select seqno from StoreSeqNo" SqlTransCmd.CommandType = CommandType.Text SqlTransCmd.Connection = SqlConn SqlTransCmd.CommandText = qry SeqNo = SqlTransCmd.ExecuteScalar If SeqNo > 2147483647 Then qry = "update StoreSeqNo set seqno=1" SqlTransCmd.CommandText = qry SqlTransCmd.ExecuteNonQuery() GenerateTransactionID = 1 Else qry = "update StoreSeqNo set seqno=" & SeqNo + 1 SqlTransCmd.CommandText = qry SqlTransCmd.ExecuteNonQuery() GenerateTransactionID = SeqNo End If End Function Private Function PrepareSQLInsert() As Boolean 'function to prepare the insert statement for the insert into the SQL stmt using 'the sql procedure SMSProcessAndDispatch Try Dim dr As DataRow SQLInsCmd = New OleDb.OleDbCommand With SQLInsCmd .CommandType = CommandType.StoredProcedure .Connection = SqlConn .CommandText = SQLInsProc .Parameters.Add("ReturnValue", OleDb.OleDbType.Integer) .Parameters("ReturnValue").Direction = ParameterDirection.ReturnValue .Parameters.Add("OutPutParam", OleDb.OleDbType.Integer) .Parameters("OutPutParam").Direction = ParameterDirection.Output .Parameters.Add("TrackID", OleDb.OleDbType.VarChar, 70) .Parameters.Add("RelLifeTime", OleDb.OleDbType.TinyInt) .Parameters("RelLifeTime").Direction = ParameterDirection.Input .Parameters.Add("Param1", OleDb.OleDbType.VarChar, 160) .Parameters("Param1").Direction = ParameterDirection.Input .Parameters.Add("TransID", OleDb.OleDbType.VarChar, 70) .Parameters("TransID").Direction = ParameterDirection.Input .Parameters.Add("UID", OleDb.OleDbType.VarChar, 20) .Parameters("UID").Direction = ParameterDirection.Input .Parameters.Add("Param", OleDb.OleDbType.VarChar, 160) .Parameters("Param").Direction = ParameterDirection.Input .Parameters.Add("CheckCredit", OleDb.OleDbType.Integer) .Parameters("CheckCredit").Direction = ParameterDirection.Input .Prepare() End With Catch ex As Exception c.ErrorLog(Now.ToString & "--PrepareSQLInsert: " & ex.Message) End Try End Function Private Function PrepOraDel() As Boolean OraDelCmd = New OleDb.OleDbCommand Try PrepOraDel = False With OraDelCmd .CommandType = CommandType.Text .Connection = OraConn .CommandText = DelSrcSQL .Parameters.Add("P(0)", OleDb.OleDbType.VarChar, 160) 'RowID .Parameters("P(0)").Direction = ParameterDirection.Input .Prepare() End With OraDataAdapter.DeleteCommand = OraDelCmd PrepOraDel = True Catch ex As Exception PrepOraDel = False End Try End Function WHat i would like to know is, if there is anyway to speed up this program? Any ideas/suggestions would be highly appreciated... Regardss, Chetan

    Read the article

  • Why is my regex so much slower compiled than interpreted?

    - by miket2e
    I have a large and complex C# regex that runs OK when interpreted, but is a bit slow. I'm trying to speed this up by setting RegexOptions.Compiled, and this seems to take about 30 seconds the first time and run instantly after that. I'm trying to negate this by compiling the regex to an assembly first, so my app can be as fast as possible. My problem is where the compiling delay takes place: Regex myComplexRegex = new Regex(regexText, RegexOptions.Compiled); MatchCollection matches = myComplexRegex.Matches(searchText); foreach (Match match in matches) // <--- when the one-time long delay kicks in { } This is making compiling to an assembly basically useless, as I still get the delay on the first foreach call. What I want is for all the compiling delay to be done in advance when I compile to the assembly, not when the user runs the app. Where am I going wrong? (The code I'm using to compile to an assembly is similar to http://www.dijksterhuis.org/regular-expressions-advanced/, if that's relevant.)

    Read the article

  • Java: VolatileImage slower than BufferedImage

    - by Norswap
    I'm making a game in Java and I used BufferedImages to render content to the screen. I had performance issues on the low-end machines where the game is supposed to run, so I switched to VolatileImage, which is normally faster. Except they actually slow the whole thing down. The images are created with GraphicsConfiguration.createCompatibleVolatileImage(...) and are drawn to the screen with Graphics.drawImage(...) (follow the link to see which one specifically). They are drawn onto a Canvas using double buffering. Does anyone have an idea of what is going wrong here?

    Read the article

  • Objective C: Using UIImage for Stroking

    - by SeongHo
    I am trying to apply a texture to my brush, but I'm really having a hard time figuring out how it is done. Here's the image of my output: I used a UIImage that just follows the touch on the screen, but when I swipe faster the result is the "W" on the right side, and the result on the left side is from when I swipe slowly. I tried using CGContextMoveToPoint and CGContextAddLineToPoint, but I don't know how to apply the texture. Is it possible to use a UIImage for the stroke texture? Here's my code: UIImage * brushImageTexture = [UIImage imageNamed:@"brushImage.png"]; [brushImageTexture drawAtPoint:CGPointMake(touchCurrentPosition.x, touchCurrentPosition.y) blendMode:blendMode alpha:1.0f];

    Read the article

  • Which approach to create the data access layer has the highest performance?

    - by pooyakhamooshi
    I have to create a very high performance application. Currently, I am using Entity Framework for my data access layer. My application has to insert some communication data almost every second. I found that Entity Framework is slow; it has about a 2-second delay to finish the SaveChanges() method. I was thinking I have the following options: 1. Create the data access layer myself using ADO.NET, with stored procedures or ad-hoc queries 2. Use the Enterprise Library Data Access Layer 3. Use NHibernate 4. Use Repository Factory: http://pooyakhamooshi.blogspot.com/search?q=repository What do you think? Which one is quicker for inserting data? Which one is quicker to set up?

    Read the article

  • MySQL: Return grouped fields where the group is not empty, efficiently

    - by Ryan Badour
    In one statement I'm trying to group rows of one table by joining to another table. I only want groups whose result is not empty. Ex. Items and Categories: SELECT Category.id FROM Item, Category WHERE Category.id = Item.categoryId GROUP BY Category.id HAVING COUNT(Item.id) > 0 The above query gives me the results that I want, but it is slow, since it has to count all the rows grouped by Category.id. What's a more efficient way? I was trying to do a GROUP BY with a LIMIT to only retrieve one row per group, but my attempts failed horribly. Any idea how I can do this? Thanks

    Read the article

  • Actionscript: Switching back into previous function from event handler function

    - by J.Ded.
    I need to return to my original function after capturing an event (downloading something) with another function. The original function needs to return a value which depends on the downloaded data. So I'd like to pause the original function for the time needed for the download and the event handler function to complete its work, and resume it afterwards. The obvious way is to set a flag value (both the original function and the event handler are within the same class) and make the original function check it until the event handler function changes the flag. But that would be wasteful, and my AS is slow enough already :) [other parts of the application use some heavy graphics]. Is there another way? Like an event that gets captured "in the middle" of the function? Or some other form of flow control?
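    ActionScript can't block inside a function while a load finishes, so the usual alternative to flag-polling is to invert the flow: the original function stops returning a value and instead takes a callback that the event handler invokes once the data has arrived. A sketch of that control-flow shape, written in TypeScript rather than ActionScript purely for illustration (all names are made up):

        // Instead of "return a value that depends on the download", accept a callback
        // and call it from the completion handler. No polling, no busy flag.
        function computeFromDownload(url: string, done: (value: string) => void): void {
            startDownload(url, (data: string) => {
                // this is the "event handler": the download has just finished
                const result = data.length > 0 ? "ok:" + data : "empty";
                done(result); // the caller's logic resumes here instead of after a return
            });
        }

        // stand-in for the real loader (URLLoader + Event.COMPLETE in ActionScript 3)
        function startDownload(url: string, onComplete: (data: string) => void): void {
            setTimeout(() => onComplete("payload from " + url), 100);
        }

        // usage: whatever used to run after the original function's return goes in this callback
        computeFromDownload("http://example.com/data", (value) => {
            console.log("got", value);
        });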

    Read the article

  • ImageData of an externally loaded Image?

    - by sri
    I load an external image and draw it on the Canvas element like so: var canvas = document.getElementById('canvas1'); var context = canvas.getContext('2d'); var image = new Image(); image.onload = function(evt) { context.drawImage(evt.target, 0, 0); } image.src = "test.jpg"; But I want to get the ImageData. So after calling context.drawImage, I do this: var imagedata = context.getImageData(0, 0, canvas.width, canvas.height); manipulate(imagedata); // modifies imagedata.data context.putImageData(imagedata, 0, 0); Is that the only way to get the ImageData of an externally loaded image? Drawing the image on the canvas and then getting the ImageData seems awfully slow. Am I missing something? Thanks!

    Read the article

  • How to pass a file (read from Java) most effectively to a native method?

    - by soc
    Hi, I have approx. 30000 files (1MB each) which I want to put into a native method, which requires just a byte array and its size as arguments. I looked through some examples and benchmarks (like http://nadeausoftware.com/articles/2008/02/java_tip_how_read_files_quickly) but all of them do some other fancy things. Basically I don't care about the contents of the file; I don't want to access anything in that file or the byte array or do anything else with it. I just want to put a file into a native method which accepts a byte array as fast as possible. At the moment I'm using RandomAccessFile, but that's horribly slow (10MB/s). Is there anything like byte[] readTheWholeFile(File file){ ... } which I could put into native void fancyCMethod(readTheWholeFile(myFile), myFile.length()) What would you suggest?

    Read the article

  • What is the best way to make a game timer in Actionscript 3?

    - by Nuthman
    I have built an online game system that depends on a timer that records how long it took a player to complete a challenge. It needs to be accurate to the millisecond. Their time is stored in a SQL database. The problem is that when I use the Timer class, some players end up with scores in the database of less than a second (which is impossible, as most challenges would take at least 11 seconds to complete even in a perfect situation). What I have found is that if a player has too many browser windows open, and/or a slow computer, the flash game slows down, actually affecting the timer speed itself. The timer is 'spinning' on screen so you can physically see the numbers slowing down. It is frustrating that I cannot just open a second thread or do something to allow Flash to keep accurate time regardless of whatever else is going on in the program. Any ideas?
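    One common way around this is to score by elapsed wall-clock time rather than by counting Timer ticks: record a timestamp when the challenge starts and subtract it when the challenge ends, so a stuttering display loop no longer changes the stored result (in ActionScript 3 the timestamp would come from getTimer() or new Date()). The idea, sketched in TypeScript:

        // Score by elapsed real time, not by accumulating timer ticks.
        class ChallengeClock {
            private startMs = 0;

            start(): void {
                this.startMs = Date.now(); // wall-clock time, unaffected by a slow render loop
            }

            // call when the player completes the challenge; this is the value to store in SQL
            elapsedMs(): number {
                return Date.now() - this.startMs;
            }
        }

        const clock = new ChallengeClock();
        clock.start();
        // ... the game runs, the on-screen timer may stutter ...
        console.log("score (ms):", clock.elapsedMs());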

    Read the article

  • (iPhone) Is maintaining a CGContextRef or CGLayerRef a bad idea?

    - by Eugene
    Hi, I need to work with many images, and I can't hold them all as UIImage in memory because they are too big. I also need to change the colors of images and merge them on the fly. Creating a UIImage from the underlying NSData, changing its color, and combining images when you can't keep many of them in memory is fairly slow (as far as I can tell). I thought maybe I could store the underlying CGLayerRef (for each image that will be combined) and a CGContextRef (for the resulting combined image). I am new to the drawing world, and not sure whether a CGLayerRef or CGContextRef is smaller in memory than a UIImage. I recently heard that a w*h image takes up w*h*4 bytes in memory. Does a CGLayerRef or CGContextRef also take up that much memory? Thank you

    Read the article

  • Help me validate these points regarding Ruby

    - by Bragaadeesh
    Hi, I have been learning Ruby for the past 2-3 weeks and I have come up with some findings on the language. Can someone please validate these points?
    1. It is implemented in many other high-level languages such as C, Java, .NET etc.
    2. It is slow, for the obvious reason that it cannot beat any of the already established high-level languages.
    3. It should never be compared with any other high-level language.
    4. It is not suitable for large applications.
    5. It is completely open source and is in a budding state.
    6. It has a framework called Rails which claims to be good for Agile development.
    7. The community out there is getting better day by day, and finding help immediately should not be a problem as time goes by.
    8. It has significant changes between releases, which many developers won't welcome right away.
    9. Running time cannot be comprehensively estimated, since the language has several underlying implementations in several languages.
    10. Books are always outdated by the time you finish them.
    Thanks.

    Read the article

  • Choosing randomly all the elements in the list just once

    - by Dalek
    How is it possible to randomly choose a number from a list with n elements, n times, without picking the same element of the list twice? I wrote code to choose the index of each element in the list, but it is slow: >>>redshift=np.array([0.92,0.17,0.51,1.33,....,0.41,0.82]) >>>redshift.shape (1225,) exclude=[] k=0 ng=1225 while (k < ng): flag1=0 sq=random.randint(0, ng) while (flag1<1): if sq in exclude: flag1=1 sq=random.randint(0, ng) else: print sq exclude.append(sq) flag1=0 z=redshift[sq] k+=1 It also doesn't choose all of the element indices in the list.
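    Picking each element exactly once in random order is just a random permutation, so the rejection loop above can be replaced with a single shuffle of the indices (in Python, random.sample(range(ng), ng) or numpy.random.permutation(ng) do this directly). The underlying idea, a Fisher-Yates shuffle, sketched in TypeScript:

        // Return the indices 0..n-1 in uniformly random order (Fisher-Yates shuffle).
        function randomOrder(n: number): number[] {
            const idx = Array.from({ length: n }, (_, i) => i);
            for (let i = n - 1; i > 0; i--) {
                const j = Math.floor(Math.random() * (i + 1)); // 0 <= j <= i
                [idx[i], idx[j]] = [idx[j], idx[i]];
            }
            return idx;
        }

        // visit every element exactly once, in random order
        const redshift = [0.92, 0.17, 0.51, 1.33, 0.41, 0.82];
        for (const i of randomOrder(redshift.length)) {
            console.log(i, redshift[i]);
        }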

    Read the article

  • Is it possible to get Semantic (emacs) to visit all files automatically?

    - by RealityMonster
    From what I can tell from the docs, Semantic works by slowly building up an idea of what's in your project by analysing each file (and possibly its neighbours) as you visit them. This is too slow. I'd like to just have it visit all the files in my project. Is there an easy way to do this? Having to visit hundreds of files before I can get decent autocomplete working seems crazy. I've also got an etags file generated. Can I leverage that somehow?

    Read the article

  • postgres min function performance

    - by wutzebaer
    Hi, I need the lowest value of runnerId. This query: SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ; takes 80ms (1968 result rows). This: SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794' ; takes 1600ms. Is there a faster way to find the minimum, or should I calc the min in my Java program? "Result (cost=100.88..100.89 rows=1 width=0)" " InitPlan 1 (returns $0)" " -> Limit (cost=0.00..100.88 rows=1 width=9)" " -> Index Scan using runneridindex on betlog (cost=0.00..410066.33 rows=4065 width=9)" " Index Cond: ("runnerId" IS NOT NULL)" " Filter: ("marketId" = 107416794::bigint)" CREATE INDEX marketidindex ON betlog USING btree ("marketId" COLLATE pg_catalog."default"); Another idea: SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1 >1600ms SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" >>100ms How can a LIMIT slow the query down?

    Read the article

  • What's the most efficient way to manage large datasets with Javascript/jQuery in IE?

    - by RenderIn
    I have a search that returns JSON, which I then transform into an HTML table in JavaScript. It repeatedly calls the jQuery.append() method, once for each row. I have a modern machine, and the Firefox response time is acceptable, but in IE 8 it is unbearably slow. I decided to move the transformation from data to HTML into the server-side PHP, changing the return type from JSON to HTML. Now, rather than calling jQuery.append() repeatedly, I call the jQuery.html() method once with the entire table. I noticed Firefox got faster, but IE got slower. These results are anecdotal and I have not done any benchmarking, but the IE performance is very disappointing. Is there something I can do to speed up the manipulation of large amounts of data in IE, or is it simply a bad idea to process this much data at once with AJAX/JavaScript?
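    For the original JSON-to-table path, the usual way to keep IE 8 responsive is to touch the DOM once, not once per row: build the whole table body as a single string and hand it to .html() (or innerHTML) in one call. A minimal sketch in TypeScript; the row shape and the #results selector are invented for the example:

        declare const $: any; // assumes jQuery is loaded

        interface Row { name: string; value: string; } // hypothetical shape of the JSON rows

        function escapeHtml(s: string): string {
            return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
        }

        function renderTable(rows: Row[]): void {
            // one big string and one DOM insertion, instead of one .append() per row
            const html = rows
                .map((r) => "<tr><td>" + escapeHtml(r.name) + "</td><td>" + escapeHtml(r.value) + "</td></tr>")
                .join("");
            $("#results tbody").html(html);
        }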

    Read the article

  • Ensuring uniqueness on a varchar greater than 255 in MySQL/InnoDB

    - by Vijay Boyapati
    I have a table which contains HTML entries for news pages. When I initially designed it I used the URL as the primary key. I've learned the error of my ways because left-joining is super slow. So I want to redesign the table with an integer (id) primary key, but still keep the rows unique based on the URL. The problem is that I've found URLs longer than 255 characters, and MySQL isn't letting me create a key on the URL. I'm using an InnoDB/UTF8 table. From what I understand it's using multiple bytes per character with a limit of 766 bytes for the key (in InnoDB). I would really love suggestions on an elegant way of keeping the rows unique based on URL, while using an integer primary key. Thanks!

    Read the article

  • Millionth number in the series 2 3 4 6 9 13 19 28 42 63 ...?

    - by HH
    It takes about a minute to reach 3000 on my computer, but I need to know the millionth number in the series. The definition is recursive, so I cannot see any shortcut except to calculate everything before the millionth number. How can you quickly calculate the millionth number in the series? Series definition: n_{i+1} = floor(3/2 * n_{i}), with n_{0} = 2. Interestingly, only one site lists the series according to Google: this one. Too-slow Bash code: #!/bin/bash function serie { n=$( echo "3/2*$n" | bc -l | tr '\n' ' ' | sed -e 's@\\@@g' -e 's@ @@g' ); # bc gives \ at very large numbers, sed-tr for it n=$( echo $n/1 | bc ) #DUMMY FLOOR func } n=2 nth=1 while [ true ]; #$nth -lt 500 ]; do serie $n # n gets new value in the function through global value echo $nth $n nth=$( echo $nth + 1 | bc ) #n++ done
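    The Bash loop is slow mostly because it forks bc (plus tr and sed) for every single step; doing the same iteration in-process with arbitrary-precision integers removes that overhead. There is no obvious shortcut past iterating, and since log10(n_i) grows by about log10(1.5) ≈ 0.176 per step, the millionth term has on the order of 176,000 digits, which is large but computable. A sketch in TypeScript using BigInt (Python's built-in integers work the same way):

        // n_{i+1} = floor(3/2 * n_i), n_0 = 2, with arbitrary-precision integers.
        function nthTerm(k: number): bigint {
            let term = 2n; // n_0
            for (let i = 0; i < k; i++) {
                term = (3n * term) / 2n; // BigInt division truncates, i.e. floors for positive values
            }
            return term;
        }

        console.log(nthTerm(9)); // 63n, matching the last listed term of 2 3 4 6 9 13 19 28 42 63
        // nthTerm(1000000) yields a number of roughly 176,000 digits; it still takes a while,
        // but nothing like forking bc once per step as the Bash version does.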

    Read the article

  • Class library modification / migration

    - by Clint
    I have 3 class libraries: a BBL, a DAL, and a DATA (about 15 datasets). Currently 4 [major] applications utilize the functionality in these DLLs. I'm rewriting one of those applications and I need to (1) Use some of the existing functionality in the libraries (2) Change some of it (3) Add new functionality (4) Add new datasets. I'm going back and forth about the best way to do this while keeping my risks at a minimum. Some thoughts: 1) Use the existing projects and don't make any modifications, only additions 2) Make new libraries, bring over the code I can use, and make additions as needed 3) Implement partial classes in the existing projects Eventually all 4 applications will use the newest functionality, but it will be a slow migration, so the old code can't be deprecated yet. Any thoughts?

    Read the article

  • Trying to change img src if another element is visible with jQuery

    - by R.J.
    I have a div ("panel" class) on my page that toggles open/closed on the click of a paragraph element ("flip" class), which has an image inside of it. Here's the HTML: <div class="panel">Contact info</div> <p class="flip"><img src="images/contactExpand.png" />Expand</p> And the jQuery: $(".flip").click(function(){ $(".panel").slideToggle("slow"); }); Everything works fine so far, but I want the image src to change to "contactCollapse.png" when the panel div is visible. This doesn't seem to do anything (image just stays the same): if ($(".panel").is(":visible") == true) { $(".flip img").attr("src","../images/contactCollapse.png") } else { $(".flip img").attr("src","../images/contactExpand.png") } Am I missing something? Thanks for any help!

    Read the article

  • Am I underestimating MySQL?

    - by user281434
    Hi, I'm about to implement a feature on my website that recommends content to users based on the content they already have in their library (a la Last.fm). A single table holds all the records of the content they have added, so a row might look something like:
    --------------------
    | userid | content |
    --------------------
    |   28   |    a    |
    --------------------
    When I want to recommend some content for a user, I use a query to get all the user ids that have content 'a' in their library. Then, out of those user ids, I make another query that finds the next most common content among those users (e.g. 'b') and show that to the user. My problem comes when I'm thinking about the big picture here. Say that eventually my site will hold something like 500,000 rows in the table; will this make the MySQL response very slow, or am I underestimating MySQL here?

    Read the article

  • We failed trying database-per-customer installation. Plan to recover?

    - by Fedyashev Nikita
    There is a web application which has been in production for 3 years or so now. Historically, for various reasons, a decision was made to use a database-per-customer installation. Now we have found that deployments are very slow. Should we consider moving all the databases back into a single one to reduce environment complexity? Or is that too risky an idea? The problem I see is that it's very hard to merge these databases while preserving referential integrity (primary keys of the same tables in different databases cannot easily be told apart). The databases are not that big, so we don't get much benefit of reduced load from having multiple databases.

    Read the article

  • Windows Azure Mobile Services first connection in Android

    - by egente
    My application is based on Windows Azure Mobile Services. The application's connection time is normally fine, but when logging in to the application for the first time the Azure Mobile Services connection is very slow, around 10 seconds; after that the connection speed is normal. How can I solve this problem? My code: private MobileServiceClient mClient; private MobileServiceTable<products> mProductsTable; mClient = new MobileServiceClient( "https://example.azure-mobile.net/", "aaaaaaaaaaaaaaaaaaaaaaaa", this).withFilter(new ProgressFilter()); mProductsTable = mClient.getTable(products.class); mProductsTable.where() .execute(new TableQueryCallback<products>() { public void onCompleted(List<products> result, int count, Exception exception, ServiceFilterResponse response) { if (exception == null) { } else{ Toast.makeText(Product.this, "Ops!!! Error.", 1000).show(); }} });

    Read the article

  • How to speed up saving a UIImagePickerController image from the camera to the filesystem via UIImagePNGRepresentation()?

    - by kazuhito0000
    I'm making an application that lets users take a photo and shows it both as a thumbnail and in a photo viewer. I have an NSManagedObject class called Photo, and Photo has a method that takes a UIImage, converts it to PNG using UIImagePNGRepresentation(), and saves it to the filesystem. After this operation, it resizes the image to thumbnail size and saves it. The problem is that UIImagePNGRepresentation() and the image resizing seem to be really slow, and I don't know if this is the right way to do it. Please tell me if anyone knows a better way to accomplish this. Thank you in advance.

    Read the article

< Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >