Search Results

Search found 18409 results on 737 pages for 'large projects'.

Page 60/737

  • MPMoviePlayerController on large videos causes massive memory spike, and a level 1 memory warning

    - by Shizam
    When viewing images my application hums along nicely with low memory consumption; once I try to watch a video using MPMoviePlayerController, memory usage spikes way up, dwarfing the previous memory graph, and if I play the video it causes a 'memory warning. Level=1' message. The video files (mp4) aren't even that big, 40MB or so, and it doesn't matter if I play the file streamed from a URL or loaded from a local file; actually the memory spike is even worse if I try to stream it. Here is the code I use to create the player:

        if (_photo.videoPath != nil) {
            _movieViewController = [[MPMoviePlayerViewController alloc] initWithContentURL:[NSURL fileURLWithPath:_photo.videoPath]];
        } else {
            _movieViewController = [[MPMoviePlayerViewController alloc] initWithContentURL:[NSURL URLWithString:_photo.videoURL]];
        }

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(videoMetaListener:)
                                                     name:MPMovieDurationAvailableNotification
                                                   object:_movieViewController.moviePlayer];

        _movieViewController.moviePlayer.scalingMode = MPMovieScalingModeAspectFit;
        _movieViewController.moviePlayer.shouldAutoplay = YES;
        _movieViewController.moviePlayer.controlStyle = MPMovieControlStyleEmbedded;

    Anybody else running into issues playing video? Also, I checked for leaks; there are none reported.

  • PHP File Upload second file does not upload, first file does without error

    - by Curtis
    So I have a script I have been using and it generally works well with multiple files... When I upload a very large file in a multiple-file upload, only the first file is uploaded. I am not seeing any errors as to why. I figure this is related to a timeout setting but cannot figure it out - any ideas? I have the following set in my .htaccess file:

        php_value post_max_size 1024M
        php_value upload_max_filesize 1024M
        php_value memory_limit 600M
        php_value output_buffering on
        php_value max_execution_time 259200
        php_value max_input_time 259200
        php_value session.cookie_lifetime 0
        php_value session.gc_maxlifetime 259200
        php_value default_socket_timeout 259200

  • The remote server returned an unexpected response: (413) Request Entity Too Large

    - by user1583591
    If anyone can help me figure out why I am getting the following error when making a call to my WCF service, I would be eternally grateful:

        The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

    I have tried modifying the config file on both the service and the client, and made sure the service name includes the namespace. I can't seem to make any progress. Here are my service config settings:

        <services>
          <service name="CCC.CA-CP &amp; Sightlines Campus Carbon Calculator">
            <endpoint address="" binding="basicHttpBinding" bindingConfiguration="Binding2"
                      contract="CCC.ICCCService" behaviorConfiguration="WebBehavior2" />
          </service>
        </services>
        <bindings>
          <basicHttpBinding>
            <binding name="Binding2" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false"
                     hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483647" maxBufferPoolSize="52428800"
                     maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8"
                     transferMode="Buffered" useDefaultWebProxy="true">
              <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="16384"
                            maxBytesPerRead="20000" maxNameTableCharCount="16384"></readerQuotas>
            </binding>
          </basicHttpBinding>
        </bindings>
        ..
        <dataContractSerializer maxItemsInObjectGraph="12097151" />
        ...
        <requestLimits maxAllowedContentLength="157286400" />
        ...
        <httpRuntime useFullyQualifiedRedirectUrl="true" maxRequestLength="2147483647"...

    I also set the client config with the same binding values. Here is the service contract:

        namespace CCC {
            [ServiceContract(Name = "CA-CP & Sightlines Campus Carbon Calculator", Namespace = "http://www.sightlines.com/CCC/01")]
            public interface ICCCService { .... }

    Thanks in advance for any help given!

  • Modulo in JavaScript - large number

    - by Benedikt R.
    Hi! I try to calculate with JS's modulo operator, but don't get the right result (which should be 1). Here is a hardcoded piece of code:

        var checkSum = 210501700012345678131468;
        alert(checkSum % 97); // Result: 66

    What's the problem here? Regards, Benedikt
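
    Not from the original post: the literal 210501700012345678131468 is larger than 2^53, so JavaScript rounds it to the nearest representable double before the % operator ever runs, which is why the result comes out wrong. A sketch of the usual workaround, computing the remainder digit by digit over the number kept as a string (shown in Python; the same loop ports directly to JavaScript):

        # Sketch: "big number mod m" computed digit by digit, so the full value
        # never has to fit in a native number type.
        def mod_of_digits(number_string, m):
            remainder = 0
            for digit in number_string:
                remainder = (remainder * 10 + int(digit)) % m
            return remainder

        print(mod_of_digits("210501700012345678131468", 97))  # prints 1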

  • CGContextDrawPDFPage taking up large amounts of memory

    - by Ed Marty
    I have a PDF file that I want to draw in outline form. I want to draw the first several pages of the document each in their own UIImage to use on a button, so that when clicked, the main display will navigate to the clicked page. However, CGContextDrawPDFPage seems to be using copious amounts of memory when attempting to draw the page. Even though the image is only supposed to be around 100px tall, the application crashes while drawing one page in particular, which, according to Instruments, allocates about 13 MB of memory just for the one page. Here's the code for drawing:

        //Note: This is always called in a background thread, but the autorelease pool is setup elsewhere
        + (void) drawPage:(CGPDFPageRef)m_page inRect:(CGRect)rect inContext:(CGContextRef)g {
            CGPDFBox box = kCGPDFMediaBox;
            CGAffineTransform t = CGPDFPageGetDrawingTransform(m_page, box, rect, 0, YES);
            CGRect pageRect = CGPDFPageGetBoxRect(m_page, box);

            //Start the drawing
            CGContextSaveGState(g);
            //Clip to our bounding box
            CGContextClipToRect(g, pageRect);

            //Now we have to flip the origin to top-left instead of bottom left
            //First: flip y-axis
            CGContextScaleCTM(g, 1, -1);
            //Second: move origin
            CGContextTranslateCTM(g, 0, -rect.size.height);
            //Now apply the transform to draw the page within the rect
            CGContextConcatCTM(g, t);

            //Finally, draw the page
            //The important bit. Commenting out the following line "fixes" the crashing issue.
            CGContextDrawPDFPage(g, m_page);

            CGContextRestoreGState(g);
        }

    Is there a better way to draw this image that doesn't take up huge amounts of memory?

  • Good projects to learn OCaml and F#

    - by Yin Zhu
    After learning the basic syntax, reading some non-trivial code is a fast way to learn a language. We can also learn how to design a library or piece of software by reading others' code. I have the following list:

    1. A Chess program in OCaml by Tomek Czajka.
    2. Several machine learning libraries written in OCaml by Hal Daumé, including decision trees, logistic regression and SVMs. All of them are near-production-quality code.
    3. A Chess Game Analysis program in F# from Microsoft Research.

    The above three are my favorites. Would you suggest some other sources? General-purpose open source software is good; specialized open source projects like the three I list here are even more welcome.

  • Horizontal Scrolling Flash Game/Large Horizontal Scene

    - by Nathan
    Hello, I'm currently learning Flash (CS4, AS3) and am creating a game. I currently have 1 flv file with 4 scenes; I move from left to right and then to scene 2 and go from left to right. This is the kind of game where items pop up that need to be clicked on and you get points. Is there any way I can combine these into 1 scene? Flash only allows a maximum width of 2880px. The reason for asking is that the transition between the scenes is RUBBISH and my AS is not working correctly between scenes (it loses values). Any help would be greatly appreciated! Nathan

  • How to create makefile for Lazarus projects?

    - by Gustavo Carreno
    After doing a light search on the Lazarus site I've come to the conclusion that this question has been asked a few times but I haven't found an answer, so I'll ask my SO peers. Is there a way to create a Makefile that replicates what the Lazarus IDE does when it compiles a project? If so, I really don't mind if it's makefile.fpc or just a plain makefile; I just want some pointers on how to get there. BTW, I've tried the option to enable the Makefile in the Lazarus options. It doesn't work.

  • Importing a large dataset into a database

    - by peaceful
    I'm a beginning programmer in the areas relevant to this question, so if possible, it'd be helpful to avoid assuming I already know a lot. I'm trying to import the OpenLibrary dataset into a local Postgres database. After it's imported, I plan to use it as a starting seed for a Ruby on Rails application that will include information on books. The OpenLibrary datasets are available here, in a modified JSON format: http://openlibrary.org/dev/docs/jsondump

    I only need very basic information for my application, much less than what is provided in the dumps. I'm only trying to get out book titles, author names, and relationships between books and authors. Below are two typical entries from their dataset, the first for an author, and the second for a book (they seem to have an entry for each edition of a book). The entries seem to lead off with a primary key, and then with a type, before including the actual JSON database dump.

        /a/OL2A /type/author {"name": "U. Venkatakrishna Rao", "personal_name": "U. Venkatakrishna Rao", "last_modified": {"type": "/type/datetime", "value": "2008-09-10 08:44:01.978456"}, "key": "/a/OL2A", "birth_date": "1904", "type": {"key": "/type/author"}, "id": 99, "revision": 3}

        /b/OL345M /type/edition {"publishers": ["Social Science Research Project, Dept. of Geography, University of Dacca"], "pagination": "ii, 54 p.", "title": "Land use in Fayadabad area", "lccn": ["sa 65000491"], "subject_place": ["East Pakistan", "Dacca region."], "number_of_pages": 54, "languages": [{"comment": "initial import", "code": "eng", "name": "English", "key": "/l/eng"}], "lc_classifications": ["S471.P162 E23"], "publish_date": "1963", "publish_country": "pk ", "key": "/b/OL345M", "authors": [{"birth_date": "1911", "name": "Nafis Ahmad", "key": "/a/OL302A", "personal_name": "Nafis Ahmad"}], "publish_places": ["Dacca, East Pakistan"], "by_statement": "[by] Nafis Ahmad and F. Karim Khan.", "oclc_numbers": ["4671066"], "contributions": ["Khan, Fazle Karim, joint author."], "subjects": ["Land use -- East Pakistan -- Dacca region."]}

    The uncompressed dumps are enormous, about 2GB for the authors list and 18GB for the book editions list. OpenLibrary does not provide any tools for this themselves; they provide a simple unoptimized Python script for reading in sample data (which, unlike the actual dumps, comes in pure JSON format), but they estimate that if it were modified for use on their actual data it would take two months (!) to finish loading the data.

    How can I read this into the database? I assume I'll need to write a program to do this. What language should I use, and do you have any guidance on how to finish in a reasonable amount of time? The only scripting language I have any experience with is Ruby.
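
    Not part of the original question: to make the shape of the task concrete, here is a minimal sketch of a streaming importer in Python. It assumes each dump line looks like the samples above (key, type, then a JSON object on one line), and it uses psycopg2 with batched inserts into an illustrative editions table; the table, column names, DSN and batch size are assumptions, not anything OpenLibrary provides.

        # Hypothetical streaming importer: reads the dump line by line so the 18GB
        # file never has to fit in memory, keeps only title/author info, and
        # inserts in batches.
        import json
        import psycopg2

        BATCH_SIZE = 5000

        def rows_from_dump(path):
            """Yield (key, title, author_keys) for edition records in the dump."""
            with open(path, encoding="utf-8") as f:
                for line in f:
                    key, record_type, payload = line.split(None, 2)  # "/b/OL345M /type/edition {...}"
                    if record_type != "/type/edition":
                        continue
                    record = json.loads(payload)
                    title = record.get("title")
                    authors = [a.get("key") for a in record.get("authors", []) if a.get("key")]
                    if title:
                        yield key, title, authors

        def import_editions(path, dsn):
            # Assumes: CREATE TABLE editions (key text primary key, title text, author_keys text[]);
            conn = psycopg2.connect(dsn)
            cur = conn.cursor()
            batch = []
            for row in rows_from_dump(path):
                batch.append(row)
                if len(batch) >= BATCH_SIZE:
                    cur.executemany(
                        "INSERT INTO editions (key, title, author_keys) VALUES (%s, %s, %s)", batch)
                    conn.commit()
                    batch = []
            if batch:
                cur.executemany(
                    "INSERT INTO editions (key, title, author_keys) VALUES (%s, %s, %s)", batch)
                conn.commit()
            cur.close()
            conn.close()

    For dumps this size, feeding a transformed stream to Postgres COPY would be much faster than row-by-row inserts, but the line-by-line structure of the loop stays the same.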

  • Visualizing Undirected Graph That's Too Large for GraphViz?

    - by Gabe
    Hi Everyone, I was wondering if anyone has any advice for rendering an undirected graph with 178,000 nodes and 500,000 edges. I've tried Neato, Tulip, and Cytoscape. Neato doesn't even come remotely close, and Tulip and Cytoscape claim they can handle it but don't seem to be able to. (Tulip does nothing and Cytoscape claims to be working, and then just stops.) Does anyone have any ideas? I'd just like a vector format file (ps or pdf) with a remotely reasonable layout of the nodes. Thanks!

  • 2D colliding n-body simulation (fast Collision Detection for large number of balls)

    - by osgx
    Hello, I want to write a program simulating the motion of a large number (N = 1000 - 10^5 and more) of bodies (circles) on a 2D plane. All bodies have equal size and the only force between them is elastic collision. I want to get something like this but on a larger scale, with more balls and denser filling of the plane (not a gas model as here, but something like a boiling-water model). So I want a fast method of detecting whether ball number i has any other ball on its path within a distance of 2*radius + V*delta_t. I don't want to do a full collision search over all N balls for each ball i. (That search would be O(N^2).) PS Sorry for the loop-animated GIF. Just press Esc to stop it. (Will not work in Chrome.)
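
    Not from the original post: the standard broad phase for equal-sized circles is a uniform grid (spatial hash) whose cell size is at least the query distance, so each ball only has to be tested against balls in its own and the eight neighbouring cells. A minimal Python sketch, assuming positions are (x, y) tuples:

        # Broad-phase collision detection with a uniform grid: each ball is binned
        # by cell, and candidate collisions are looked up only in the 3x3 block of
        # cells around it. Cell size >= 2*radius + V*delta_t means no pair is missed.
        from collections import defaultdict

        def build_grid(positions, cell_size):
            grid = defaultdict(list)
            for i, (x, y) in enumerate(positions):
                grid[(int(x // cell_size), int(y // cell_size))].append(i)
            return grid

        def candidate_pairs(positions, cell_size):
            grid = build_grid(positions, cell_size)
            pairs = set()
            for (cx, cy), members in grid.items():
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for j in grid.get((cx + dx, cy + dy), ()):
                            for i in members:
                                if i < j:
                                    pairs.add((i, j))
            return pairs

        # Usage sketch: only the candidates need the exact distance test.
        # cell = 2 * radius + v_max * dt
        # for i, j in candidate_pairs(positions, cell):
        #     resolve_if_colliding(i, j)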

  • GTK#-related error on MonoDevelop 2.8.5 on Ubuntu 11.04

    - by Mehrdad
    When I try to create a new solution in MonoDevelop 2.8.5 in Ubuntu 11.04 x64, it shows me:

        System.ArgumentNullException: Argument cannot be null.
        Parameter name: path1
          at System.IO.Path.Combine (System.String path1, System.String path2) [0x00000] in <filename unknown>:0
          at MonoDevelop.Core.FilePath.Combine (System.String[] paths) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.ProjectCreateInformation.get_BinPath () [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.DotNetProject..ctor (System.String languageName, MonoDevelop.Projects.ProjectCreateInformation projectCreateInfo, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.DotNetAssemblyProject..ctor (System.String languageName, MonoDevelop.Projects.ProjectCreateInformation projectCreateInfo, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.DotNetProjectBinding.CreateProject (System.String languageName, MonoDevelop.Projects.ProjectCreateInformation info, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.DotNetProjectBinding.CreateProject (MonoDevelop.Projects.ProjectCreateInformation info, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.ProjectService.CreateProject (System.String type, MonoDevelop.Projects.ProjectCreateInformation info, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Ide.Templates.ProjectDescriptor.CreateItem (MonoDevelop.Projects.ProjectCreateInformation projectCreateInformation, System.String defaultLanguage) [0x00000] in <filename unknown>:0
          at MonoDevelop.Ide.Templates.ProjectTemplate.HasItemFeatures (MonoDevelop.Projects.SolutionFolder parentFolder, MonoDevelop.Projects.ProjectCreateInformation cinfo) [0x00000] in <filename unknown>:0
          at MonoDevelop.Ide.Projects.NewProjectDialog.SelectedIndexChange (System.Object sender, System.EventArgs e) [0x00000] in <filename unknown>:0

    I strace'd it and saw repeated failed accesses to files like:

        /usr/lib/mono/gac/gtk-sharp/2.12.0.0__35e10195dab3c99f/libgtk-x11-2.0.so.0.la

    so I'm assuming that's the cause of the problem. However, I've installed (and re-installed) everything GTK#-related that I could think of... and the error still occurs. Does anyone know how to fix it?

  • Large file uploads from web pages

    - by jerrygarciuh
    Hi folks, I code primarily in PHP and Perl. I have a client who is insisting on seeking video submissions (any encoding) from the public via one of their pages, rather than letting YouTube do its job. The server in question is a virtual machine, and I can adjust ini settings for max post size, max upload size, etc. as needed. My initial thought is to use a Flash-based uploader with PHP on the back end, but I wondered if someone might have useful advice and experience on the subject? Peace JG

  • Slow select when inserting large amounts of data (MYSQL)

    - by siannopollo
    I have a process that imports a lot of data (950k rows) using inserts that insert 500 rows at a time. The process generally takes about 12 hours, which isn't too bad. Normally doing a query on the table is pretty quick (under 1 second) as I've put (what I think to be) the proper indexes in place. The problem I'm having is trying to run a query when the import process is running. It is making the query take almost 2 minutes! What can I do to make these two things not compete for resources (or whatever)? I've looked into "insert delayed" but not sure I want to change the table to MyISAM. Thanks for any help!
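
    Not from the original post: one low-tech mitigation is to commit in modest batches and pause briefly between them so concurrent SELECTs get scheduled. A sketch against the Python DB-API (any MySQL driver with that interface); the batch size, pause, table and column names are all illustrative:

        # Hypothetical throttled bulk loader: inserts rows in batches and sleeps
        # briefly between batches so concurrent SELECTs get a chance to run.
        # Slower overall, but it smooths out the contention the import creates.
        import time

        BATCH_SIZE = 500       # matches the 500-row inserts described above
        PAUSE_SECONDS = 0.05   # tune: larger pause = less contention, longer import

        def throttled_import(connection, rows):
            """rows: iterable of (col_a, col_b) tuples for an illustrative table."""
            cursor = connection.cursor()
            batch = []
            for row in rows:
                batch.append(row)
                if len(batch) >= BATCH_SIZE:
                    cursor.executemany("INSERT INTO my_table (col_a, col_b) VALUES (%s, %s)", batch)
                    connection.commit()
                    batch = []
                    time.sleep(PAUSE_SECONDS)   # yield to readers
            if batch:
                cursor.executemany("INSERT INTO my_table (col_a, col_b) VALUES (%s, %s)", batch)
                connection.commit()
            cursor.close()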

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the file first; however this incurs a 1000 millisecond delay when serializing the file and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.

    One idea is to spawn a new Ruby process to hold this hash that provides an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end

        @c = Marshal.dump(@a)  # 1000 milliseconds
        Marshal.load(@c)       # 400 milliseconds

    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options:

    1. Create a Sinatra application to store this hash, with an API to modify/access it.
    2. Create a C application to do the same as #1, but a lot faster.

    The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best-answer credit.
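
    Not from the original post, and sketched in Python rather than Ruby, but it shows the kind of representation that avoids serialization altogether: because the data is a fixed-size grid of 0/1 values, it can live in a shared-memory byte buffer that every process maps and reads in place. The segment name and dimensions below are illustrative.

        # Sketch: keep a 0/1 grid in a shared-memory buffer (Python 3.8+).
        # Processes attach by name and read/write cells directly -- no dump/load.
        from multiprocessing import shared_memory

        ROWS, COLS = 501, 10_001          # mirrors the 0..500 x 0..10_000 grid above

        def create_grid(name="grid-demo"):
            return shared_memory.SharedMemory(create=True, size=ROWS * COLS, name=name)

        def set_cell(shm, r, c, value):
            shm.buf[r * COLS + c] = value  # one byte per cell keeps the code trivial

        def get_cell(shm, r, c):
            return shm.buf[r * COLS + c]

        # Writer process:
        #   shm = create_grid()
        #   set_cell(shm, 42, 999, 1)
        # Reader process (attaches to the same segment by name):
        #   shm = shared_memory.SharedMemory(name="grid-demo")
        #   print(get_cell(shm, 42, 999))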

  • very large image manipulation and tiling

    - by Mohammad
    I need software, a program (Java), or a method for tiling very large images (more than 140MB). I have used ImageMagick's convert tool, Photoshop, CorelDRAW and MATLAB (on Windows), but I have memory problems: there is not enough memory, ImageMagick is very slow, and the result is not desirable. I don't know how to load only a small part of the image from the hard disk into RAM (without loading the whole image from disk).
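
    Not from the original question: libvips is designed for exactly this kind of streaming processing (it reads and writes images in small chunks rather than decoding the whole file). A sketch using its Python binding pyvips, assuming it is installed; the DeepZoom output layout and the dzsave parameters shown are assumptions to check against the installed version.

        # Sketch: tile a very large image without holding it all in RAM, using
        # libvips' streaming access mode. Output is a pyramid of small tiles.
        import pyvips

        def tile_image(src_path, out_basename):
            # access="sequential" tells libvips to stream the file instead of loading it
            image = pyvips.Image.new_from_file(src_path, access="sequential")
            # dzsave writes a DeepZoom-style tile pyramid (out_basename.dzi + tile folders)
            image.dzsave(out_basename, tile_size=256, overlap=0)

        tile_image("huge_scan.tif", "huge_scan_tiles")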

  • Web Deployment Projects for VS2010 on build server failing with Error MSB4086

    - by SteveBering
    When I upgraded my Web Deployment Project from VS2008 to the VS2010 beta version, I was able to execute the build locally on my development box. However, when I tried to execute the build on our TeamCity build server, I began getting the following exception:

        C:\Program Files\MSBuild\Microsoft\WebDeployment\v10.0\Microsoft.WebDeployment.targets(162, 37): error MSB4086: A numeric comparison was attempted on "$(_SourceWebProjectPath.Length)" that evaluates to "" instead of a number, in condition "'$(_SourceWebProjectPath)' != '' And $(_SourceWebProjectPath.Length) >= 4)".

    I did install the Web Deployment Project add-in on my build server, and I did copy the C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications directory from my development box to the C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\ directory on the build server. Note: my dev box is 64-bit and the build server is 32-bit. I can't figure out why this is behaving differently on the build server than on my dev machine. Anyone have any ideas? Thanks, Steve

  • Natural language processing - Ideas for beginner's projects

    - by Microkernel
    Hi guys, I am a beginner in NLP and NLTK. I am very interested in NLP and hence joined a weekend course on AI at a local institution, which requires me to do a project to complete the course, and I decided to do it in NLP. The problem is, the instructor is not good at all for this course (according to me she is just a charlatan) (or may not be very interested in teaching, as this is her last batch here, after which the institute is going to send her out). So I am stuck in a situation where I have to finish this project in one to one-and-a-half months, but as a naive person in the field I am finding it very difficult to comprehend the things required to decide on a project. (Also, as I am working full time, I am not finding enough time to dedicate to this.) I considered using the NLTK toolkit in Python for the project for the following reasons: (1) Python is famous for ease of use, rapid prototyping and a very active community (considering the very short span of time I have, and as I am a C programmer by profession, I need a language that I can learn fast and is simple to use). (2) NLTK has good reviews, extensive documentation and a very active community. So the problem is: what project should I take up, so that I can learn something and will be able to finish the project in time? (I know almost nothing in NLP, don't even know what exactly a corpus is... :( ) So, please suggest some topics that I should consider for the project. Regards, MicroKernel :)

  • Storing a large list in isolatedStorage on WP7

    - by Ra
    I'm storing a List with around 3,000 objects in IsolatedStorage using XML serialization. It takes too long to deserialize this, and I was wondering if you have any recommendations to speed it up. The time is tolerable when deserializing up to 500 objects, but it takes forever to deserialize 3,000. Does it take longer just on the emulator, and will it be faster on the phone? I did a whole bunch of searching and some article said to use a binary stream reader, but I can't find it. Whether I store in binary or XML doesn't matter; I just want to persist the List. I don't want to look at asynchronous loading just yet...

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but never otherwise updated or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses are some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more.

    I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id = 14 and save_id >= 53 order by save_id", then manually concatenating it all together in code.

    Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) and easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?
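
    Not from the original post: whatever backend ends up holding the data, the access pattern (append-only writes, reads from save N to the end) maps naturally onto one append-only file per document plus a small index of save offsets. A minimal single-node Python sketch of that layout, with the paths and 8-byte offset framing chosen purely for illustration:

        # Sketch: one append-only data file per document, plus a sidecar offsets
        # file. Each append records where the new save begins, so "save #53 to
        # the end" is a single seek followed by a sequential read.
        import os
        import struct

        class DocumentStore:
            def __init__(self, root):
                self.root = root
                os.makedirs(root, exist_ok=True)

            def _paths(self, doc_id):
                base = os.path.join(self.root, str(doc_id))
                return base + ".dat", base + ".idx"

            def append(self, doc_id, blob: bytes):
                dat, idx = self._paths(doc_id)
                offset = os.path.getsize(dat) if os.path.exists(dat) else 0
                with open(dat, "ab") as d:
                    d.write(blob)
                with open(idx, "ab") as i:
                    i.write(struct.pack("<Q", offset))  # byte offset of this save

            def read_from(self, doc_id, save_id):
                """Return everything from save number save_id (0-based) to the end."""
                dat, idx = self._paths(doc_id)
                with open(idx, "rb") as i:
                    i.seek(save_id * 8)
                    (offset,) = struct.unpack("<Q", i.read(8))
                with open(dat, "rb") as d:
                    d.seek(offset)
                    return d.read()

        # store = DocumentStore("/var/data/docs")
        # store.append(4324319, b"...delta 53...")
        # tail = store.read_from(4324319, 53)

    Replication across nodes and crash recovery are what a distributed store layers on top of this; the sketch only illustrates the local layout that makes the tail read a single seek plus a sequential read.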

  • How large should my recv buffer be when calling recv in the socket library

    - by Silmaril89
    Hi, I have a few questions about the socket library in C. Here is a snippet of code I'll refer to in my questions:

        char recv_buffer[3000];
        recv(socket, recv_buffer, 3000, 0);

    First, how do I decide how big to make recv_buffer? I'm using 3000, but it's arbitrary. Second, what happens if recv() receives a packet bigger than my recv_buffer? Third, how can I know if I have received the entire message without calling recv again and having it wait forever when there is nothing to be received? And finally, is there a way I can make a buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out of space? Maybe using strcat to concatenate the latest recv() response to the buffer? I know it's a lot of questions in one, but I would greatly appreciate any responses.
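
    Not part of the original question: the usual answer to "how do I know I have the whole message" is that TCP hands the application a byte stream, not packets, so messages have to be framed by the application itself (for example with a length prefix) and received in a loop until that many bytes have arrived. A sketch of that framing idea in Python; the loop structure carries over directly to C:

        # Sketch: length-prefixed framing over TCP. The sender writes a 4-byte
        # big-endian length followed by the payload; the receiver loops on recv()
        # until it has exactly that many bytes, so it never blocks waiting for
        # data that isn't coming.
        import socket
        import struct

        def send_message(sock: socket.socket, payload: bytes) -> None:
            sock.sendall(struct.pack("!I", len(payload)) + payload)

        def recv_exact(sock: socket.socket, n: int) -> bytes:
            chunks = []
            remaining = n
            while remaining > 0:
                chunk = sock.recv(min(remaining, 4096))  # buffer size is just a read-chunk size
                if not chunk:
                    raise ConnectionError("socket closed before full message arrived")
                chunks.append(chunk)
                remaining -= len(chunk)
            return b"".join(chunks)

        def recv_message(sock: socket.socket) -> bytes:
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            return recv_exact(sock, length)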

  • Visual Studio 2003 web projects

    - by swapna
    Hi, I have a requirement to work on a VS2003 web project. I have VS2008, VS2010 and VS2003 installed on my system. Other system details: Windows XP Professional, Service Pack 3; IIS 5.1. When I try to create a VS2003 web project giving localhost as the path, I get the following error: "Visual Studio noted that the specified web server is not running under ASP.NET 1.1 version. You will be unable to run ASP.NET web applications or services." I have used aspnet_regiis commands as well as a tool (ASPNETVersionSwitcher.exe) to switch versions, and in IIS the default web site's ASP.NET version is set to 1.1.4322. I still get the error. I get the same error if I point a virtual directory at the existing 1.1 .NET web application and try to open it. Please advise, since I have to work on this project as soon as possible. Thanks SNA

  • Encrypted AES key too large to Decrypt with RSA (Java)

    - by Petey B
    Hello, I am trying to make a program that encrypts data using AES, then encrypts the AES key with RSA, and then decrypts. However, once I encrypt the AES key it comes out to 128 bytes. RSA will only allow me to decrypt 117 bytes or less, so when I go to decrypt the AES key it throws an error. Relevant code:

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        KeyPair kpa = kpg.genKeyPair();
        pubKey = kpa.getPublic();
        privKey = kpa.getPrivate();
        updateText("Private Key: " + privKey + "\n\nPublic Key: " + pubKey);

        updateText("Encrypting " + infile);

        // Generate AES key
        KeyGenerator kgen = KeyGenerator.getInstance("AES");
        kgen.init(128); // 192/256
        SecretKey aeskey = kgen.generateKey();
        byte[] raw = aeskey.getEncoded();
        SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES");

        updateText("Encrypting data with AES");
        // Encrypt data with AES key
        Cipher aesCipher = Cipher.getInstance("AES");
        aesCipher.init(Cipher.ENCRYPT_MODE, skeySpec);
        SealedObject aesEncryptedData = new SealedObject(infile, aesCipher);

        updateText("Encrypting AES key with RSA");
        // Encrypt AES key with RSA
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, pubKey);
        byte[] encryptedAesKey = cipher.doFinal(raw);

        updateText("Decrypting AES key with RSA. Encrypted AES key length: " + encryptedAesKey.length);
        // Decrypt AES key with RSA
        Cipher decipher = Cipher.getInstance("RSA");
        decipher.init(Cipher.DECRYPT_MODE, privKey);
        byte[] decryptedRaw = cipher.doFinal(encryptedAesKey); // error thrown here because encryptedAesKey is 128 bytes
        SecretKeySpec decryptedSecKey = new SecretKeySpec(decryptedRaw, "AES");

        updateText("Decrypting data with AES");
        // Decrypt data with AES key
        Cipher decipherAES = Cipher.getInstance("AES");
        decipherAES.init(Cipher.DECRYPT_MODE, decryptedSecKey);
        String decryptedText = (String) aesEncryptedData.getObject(decipherAES);
        updateText("Decrypted Text: " + decryptedText);

    Any idea on how to get around this?
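
    Not from the original post: the 117-byte limit applies to what RSA-1024 with PKCS#1 padding can encrypt; decrypting a 128-byte RSA ciphertext is normal, so this error usually means the decrypt step is being run through a cipher still in encrypt mode (note the line above calls cipher.doFinal where decipher.doFinal appears intended). For reference, a minimal sketch of the same RSA-wrapped-AES pattern in Python using the cryptography package (assumed installed); the key sizes and padding choices are illustrative:

        # Sketch of hybrid encryption: data under AES-GCM, AES key wrapped with RSA.
        import os
        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        # RSA key pair (wraps/unwraps the AES key)
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        # AES key and data encryption
        aes_key = AESGCM.generate_key(bit_length=128)
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, b"secret payload", None)

        # Wrap the AES key with the *public* key...
        wrapped_key = public_key.encrypt(aes_key, padding.PKCS1v15())

        # ...and unwrap it with the *private* key (a decrypt operation, not another encrypt)
        recovered_key = private_key.decrypt(wrapped_key, padding.PKCS1v15())
        plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
        assert plaintext == b"secret payload"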
