Search Results

Search found 16892 results on 676 pages for 'linear search'.

Page 516/676

  • how to debug ExceptionInInitializerError?

    - by grmn.bob
    I am getting an exception in a very simple 'study' application, so I expect the problem to be in my project setup, but I don't know how to debug it. What is the context of the exception ExceptionInInitializerError, and where is it documented?

    A: Search the Android Developers Guide. Stack trace from within the Eclipse debugger (select thread, right-click, Copy Stack):

        Thread [<3> main] (Suspended (exception ExceptionInInitializerError))
        Class.newInstance() line: 1479
        Instrumentation.newActivity(ClassLoader, String, Intent) line: 1021
        ActivityThread.performLaunchActivity(ActivityThread$ActivityRecord, Intent) line: 2367
        ActivityThread.handleLaunchActivity(ActivityThread$ActivityRecord, Intent) line: 2470
        ActivityThread.access$2200(ActivityThread, ActivityThread$ActivityRecord, Intent) line: 119
        ActivityThread$H.handleMessage(Message) line: 1821
        ActivityThread$H(Handler).dispatchMessage(Message) line: 99
        Looper.loop() line: 123
        ActivityThread.main(String[]) line: 4310
        Method.invokeNative(Object, Object[], Class, Class[], Class, int, boolean) line: not available [native method]
        Method.invoke(Object, Object...) line: 521
        ZygoteInit$MethodAndArgsCaller.run() line: 860
        ZygoteInit.main(String[]) line: 618
        NativeStart.main(String[]) line: not available [native method]
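
    For background: ExceptionInInitializerError is documented in java.lang and means a static initializer (or the initialization of a static field) threw; getCause() holds the real exception. On Android it is typically thrown while launching an Activity whose class (or a class it references) has a failing static initializer. A minimal, contrived sketch:

        public class InitializerDemo {
            static class Config {
                // This static field initializer throws, so the first use of Config
                // fails with ExceptionInInitializerError wrapping the real cause.
                static final int TIMEOUT = Integer.parseInt("not a number");
            }

            public static void main(String[] args) {
                try {
                    System.out.println(Config.TIMEOUT);
                } catch (ExceptionInInitializerError e) {
                    // getCause() is the exception thrown inside the initializer
                    System.out.println("cause: " + e.getCause());
                }
            }
        }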

    Read the article

  • Clear default values using onsubmit

    - by Thomas
    I need to clear the default values from input fields using JavaScript, but all of my attempts so far have failed to target and clear the fields. I was hoping to use onSubmit to execute a function that clears all default values (if the user has not changed them) before the form is submitted.

        <form method='get' class='custom_search widget custom_search_custom_fields__search' onSubmit='clearDefaults' action='http://www.example.com' >
        <input name='cs-Price-2' id='cs-Price-2' class='short_form' value='Min. Price' />
        <input name='cs-Price-3' id='cs-Price-3' class='short_form' value='Max Price' />
        <input type='submit' name='search' class='formbutton' value=''/>
        </form>

    How would you accomplish this?
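
    For illustration, a minimal sketch (not from the original thread): compare each text input's current value with its defaultValue and blank it if the user never changed it, then wire it up with onSubmit='return clearDefaults(this);' on the form:

        <script type='text/javascript'>
        function clearDefaults(form) {
            // Blank any text input whose value still equals its original default,
            // so placeholder text like 'Min. Price' is not submitted.
            for (var i = 0; i < form.elements.length; i++) {
                var el = form.elements[i];
                if (el.type === 'text' && el.value === el.defaultValue) {
                    el.value = '';
                }
            }
            return true; // let the submit continue
        }
        </script>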

    Read the article

  • Server Side Javascript

    - by XGreen
    Hi all, I can't help noticing on many sites I visit the enthusiasm about server-side JavaScript and the appealing look of a single language governing all tiers of a site. Mozilla Rhino, Aptana Jaxer and various John Resig articles are some of the highlights of my search. I wanted to ask for some input from you guys on SO: your opinions and, preferably, your experience with this. I currently do most of the data access and business logic with either ASP.NET or PHP, depending on the client's hosting package. Has anyone among you given these up for server-side JavaScript?

    Read the article

  • php regex expression to get title

    - by 55skidoo
    I'm trying to strip content titles out of the middle of text strings. Could I use a regex to strip everything out of strings like these except for the title (e.g. "The 10 Best Regex Expressions"), or is there a better way?

        Joe User wrote a blog post called The 10 Best Regex Expressions in the category Regex.
        Jane User wrote a blog post called Regex is Hard! in the category TechProblems.

    I've tried to come up with a regex expression to cover this, but I think it might need two. The trick is that the surrounding text is always the same, so you could search for it, like this:

        regex 1: delete everything before and including "wrote a blog post called"
        regex 2: delete "in the category" and everything after it.
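
    For illustration, a sketch of the single-regex approach (it assumes the title itself never contains " in the category "): capture everything between the two fixed phrases instead of deleting around them.

        <?php
        $lines = array(
            "Joe User wrote a blog post called The 10 Best Regex Expressions in the category Regex.",
            "Jane User wrote a blog post called Regex is Hard! in the category TechProblems.",
        );

        foreach ($lines as $line) {
            // one capture group between the two fixed phrases
            if (preg_match('/wrote a blog post called (.+) in the category /', $line, $m)) {
                echo $m[1], "\n";   // "The 10 Best Regex Expressions", "Regex is Hard!"
            }
        }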

    Read the article

  • What non-programming books should programmers read?

    - by Charles Roper
    This is a poll asking the Stackoverflow community what non-programming books they would recommend to fellow programmers. Please read the following before posting: Please post only ONE BOOK PER ANSWER. Please search for your recommendation on this page before posting (there are over NINE PAGES so it is advisable to check them all). Many books have already been suggested and we want to avoid duplicates. If you find your recommendation is already present, vote it up or add some commentary. Please elaborate on why you think a given book is worth reading from a programmer's perspective. This poll is now community editable, so you can edit this question or any of the answers. Note: this article is similar and contains other useful suggestions.

    Read the article

  • ColdFusion code parser?

    - by Kip
    I'm trying to create an app to search my company's ColdFusion codebase. I'd like to be able to do intelligent searches, for example: find where a function is defined (and not hit everywhere the function is called). In order to do this, I'd need to parse the ColdFusion code to identify things like function declarations, function calls, database queries, etc. I've looked into using lex and yacc, but I've never used them before and the learning curve seems very steep. I'm hoping there is something already out there that I could use. My other option is a mess of difficult-to-maintain regex-spaghetti code, which I want to avoid.

    Read the article

  • joining tables while keeping the Null values

    - by Tam
    I have two tables:

        Users:    ID, first_name, last_name
        Networks: user_id, friend_id, status

    I want to select all rows from the users table, but I want to display the status relative to a specific user (say, with id = 2) while keeping the other ones as NULL. For instance, if I have these users:

        1  John Smith
        2  Tom Summers
        3  Amy Wilson

    and this row in networks:

        user_id  friend_id  status
        2        1          friends

    then searching on John Smith against all other users should give:

        id  first_name  last_name  status
        2   Tom         Summers    friends
        3   Amy         Wilson     NULL

    I tried doing a LEFT JOIN with a WHERE statement, but it didn't work because it excluded the rows that have relations with other users but not with this user. I can do this using a UNION statement, but I was wondering if it's at all possible to do it without UNION.
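
    For illustration, a sketch of one way to do it without a UNION (assuming MySQL-style syntax and that John Smith, the user being searched on, has id 1): put the per-user condition in the join's ON clause rather than in WHERE, so users with no matching row survive with a NULL status.

        SELECT u.id, u.first_name, u.last_name, n.status
        FROM users u
        LEFT JOIN networks n
               ON (n.user_id = u.id AND n.friend_id = 1)   -- relation in either
               OR (n.user_id = 1 AND n.friend_id = u.id)   -- direction with user 1
        WHERE u.id <> 1;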

    Read the article

  • Mysql results sorted by list which is unique for each user

    - by ADAM
    I've got a table of thousands of products and 50 or so authenticated users. These users all show the products on their own web sites, and they all require the ability to have them ordered differently. I'm guessing I need some kind of separate table for the orderings, which contains a product_id, user_id and order column? How do I do this most efficiently in MySQL so that it is very fast and doesn't slow down if I get millions of products in the database? Is it even wise to do it in MySQL, or should I be using some other kind of index like Solr/Lucene? My product table is called "products" and my user table is called "users". A good example of the functionality I need is Google search, where you can re-order or suppress the results if you are logged in. Edit: the product results will be paginated, and the users have the authority to edit the products, so it's not just read-only.
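
    For illustration, a sketch of the usual approach (table and column names here are made up): a small join table holding each user's position for a product, joined in at query time, with a fallback order and LIMIT/OFFSET for pagination. With a composite index on (user_id, position) this stays a cheap lookup even with a very large products table.

        CREATE TABLE user_product_order (
            user_id    INT NOT NULL,
            product_id INT NOT NULL,
            position   INT NOT NULL,
            PRIMARY KEY (user_id, product_id),
            KEY idx_user_position (user_id, position)
        );

        SELECT p.*
        FROM products p
        LEFT JOIN user_product_order o
               ON o.product_id = p.id        -- assumes products has an `id` column
              AND o.user_id    = 42          -- the logged-in user
        ORDER BY o.position IS NULL,         -- explicitly ordered rows first
                 o.position,
                 p.id                        -- stable fallback for the rest
        LIMIT 50 OFFSET 0;                   -- pagination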

    Read the article

  • WPF: How to dynamically find a usercontrol or datatemplate

    - by Elger
    I have a bunch of DataTemplates I use to display various SQL views in an ItemsControl. I don't know which DataTemplate I'm going to use until run time (every view has different columns). In addition, I made a generic dynamic DataTemplate for all those views that don't need anything special. When I display a view, I first want to look through all the available DataTemplates for one that matches, and otherwise use the default dynamic DataTemplate. My question is: how can I search for a DataTemplate by name in code? A UserControl would also work. Thanks, Elger
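
    For illustration, a minimal sketch of looking a template up by key at run time and falling back to a default ("DefaultDynamicTemplate" is a hypothetical key; the templates are assumed to be keyed resources reachable from the control or App.xaml):

        // TryFindResource returns null instead of throwing when the key is missing,
        // so it can be used to probe for a view-specific template first.
        DataTemplate ResolveTemplate(FrameworkElement host, string viewName)
        {
            var specific = host.TryFindResource(viewName) as DataTemplate;
            return specific ?? (DataTemplate)host.FindResource("DefaultDynamicTemplate");
        }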

    Read the article

  • Python/numpy tricky slicing problem

    - by daver
    Hi Stack Overflow, I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do. Say we have a simple array like this:

        a = array([1, 0, 0, 0])

    I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:

        a[1:] = a[0:3]

    This would get the following result:

        a = array([1, 1, 1, 1])

    Or something like this:

        a[1:] = 2*a[:3]
        # a = [1,2,4,8]

    To illustrate further, I want the following kind of behaviour:

        for i in range(len(a)):
            if i == 0:
                continue  # skip the boundary entry
            a[i] = a[i-1]

    Except I want the speed of numpy. The default behavior of numpy is to take a copy of the slice, so what I actually get is this:

        a = array([1, 1, 0, 0])

    I already have this array as a subclass of the ndarray, so I can make further changes to it if need be; I just need the slice on the right-hand side to be continually updated as it updates the slice on the left-hand side. Am I dreaming, or is this magic possible?

    Update: this is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions. I was trying to avoid going into this because it's really not necessary and likely to confuse things further, but here goes. The algorithm is this:

        while not converged:
            for i in range(len(u[:,0])):
                for j in range(len(u[0,:])):
                    # skip over boundary entries, i,j == 0 or len(u)
                    u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1])

    Right? But you can do this two ways. Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles; to do it in loops you would copy the array and then update one array from the copy. However, Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, so no copy is needed and the loop essentially 'knows', since the array is re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1], the information calculated in the previous loop will be there. I want to replace this slow and ugly nested loop with one nice clean line of code using numpy slicing:

        u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])

    But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel; it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Right? Then every time numpy loops, that slice will 'update', or really just replicate whatever happened in the update. To do this I need slices on both sides of the array to be pointers. Anyway, if there is some really, really clever person out there, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.
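
    For illustration, one standard workaround rather than a literal answer to the "views" question: red-black (checkerboard) Gauss-Seidel updates half of the interior points per half-sweep, so each assignment only reads values it is not writing in that same statement and ordinary numpy slicing/masking is safe. A sketch for the 4-point Laplace stencil (boundary rows/columns of u are treated as fixed):

        import numpy as np

        def gauss_seidel_redblack(u, sweeps=100):
            """In-place red-black Gauss-Seidel sweeps over the interior of u."""
            i, j = np.indices(u.shape)
            interior = (i > 0) & (i < u.shape[0] - 1) & (j > 0) & (j < u.shape[1] - 1)
            for _ in range(sweeps):
                for parity in (0, 1):
                    # update only the "red" or "black" interior points this half-sweep
                    mask = interior & ((i + j) % 2 == parity)
                    avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                                  np.roll(u, 1, 1) + np.roll(u, -1, 1))
                    u[mask] = avg[mask]
            return u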

    Read the article

  • Determine fonts used in postscript (.ps) file

    - by Bernard Vander Beken
    Given a PostScript file that has the following header:

        %!PS-Adobe-3.0

    I would like to list all fonts used in the file. The output does not have to be perfect, but I need to make sure I get all references to any font being used. I am aware there are different types of fonts, and that a font may or may not be embedded in the PostScript file. My current best idea is to grep/search for the word "Font" case-insensitively and go from there. Will this get me all the font references? Is there a better way to achieve this? I tend to use .NET/C# for development purposes, but any solution is appreciated. Thanks, Bernard

    Read the article

  • Timed email reminder in python

    - by Ali
    I have written a Python script that allows a user to input a message, their email address and the time at which they would like the email sent. This is all stored in a MySQL database. However, how do I get the script to execute at the given time and date? Will it require a cron job? I mean, say at 2:15 on April 20th the script will search the database for all times of 2:15 and send out those emails, but what about the emails for 2:16? I am using a shared hosting provider, so I can't have a continuously running script. Thanks
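
    For illustration, a minimal sketch of the usual shared-hosting approach (table, column and SMTP details below are assumptions): a cron job runs every minute and sends everything that has come due, so per-minute precision comes from cron rather than from a long-running process, and a late run still catches earlier minutes.

        #!/usr/bin/env python
        # send_due_reminders.py -- run from cron, e.g.:
        #   * * * * * /usr/bin/python /home/user/send_due_reminders.py
        import smtplib
        from email.mime.text import MIMEText
        import MySQLdb

        db = MySQLdb.connect(host="localhost", user="dbuser", passwd="dbpass", db="reminders")
        cur = db.cursor()
        # everything due up to now that has not been sent yet
        cur.execute("SELECT id, email, message FROM reminders "
                    "WHERE send_at <= NOW() AND sent = 0")

        smtp = smtplib.SMTP("localhost")
        for rid, email, message in cur.fetchall():
            msg = MIMEText(message)
            msg["Subject"] = "Reminder"
            msg["From"] = "reminders@example.com"
            msg["To"] = email
            smtp.sendmail(msg["From"], [email], msg.as_string())
            cur.execute("UPDATE reminders SET sent = 1 WHERE id = %s", (rid,))
        smtp.quit()
        db.commit()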

    Read the article

  • Lucene.NET and searching on multiple fields with specific values...

    - by Kieron
    Hi, I've created an index with various bits of data for each document I've added; each document can differ in its field names. Later on, when I come to search the index, I need to query it with exact field/value pairs, for example:

        FieldName1 = X AND FieldName2 = Y AND FieldName3 = Z

    What's the best way of constructing the following using Lucene.NET?

        - What analyzer is best to use for this exact-match type of query?
        - Upon retrieving a match, I only need one specific field to be returned (which I add to each document) - should this be the only one stored?
        - Later on I'll need to support keyword searching (so a field can have a list of values and I'll need to do a partial match).

    The fields and values come from a Dictionary<string, string>. It's not user input; it's constructed from code. Thanks, Kieron
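
    For illustration, a rough sketch of the exact-match part (Lucene.Net 3.x-style names; older releases use BooleanClause.Occur instead of Occur, so adjust to your version): fields indexed un-analysed, then a BooleanQuery of MUST TermQuery clauses ANDing the criteria together. Only the field you need back has to be stored.

        using System.Collections.Generic;
        using Lucene.Net.Index;
        using Lucene.Net.Search;

        static class ExactMatch
        {
            // Builds FieldName1 = X AND FieldName2 = Y AND ... as MUST clauses.
            // Assumes the fields were indexed un-analysed (e.g. KeywordAnalyzer or
            // Field.Index.NOT_ANALYZED), so TermQuery compares the whole value.
            public static Query Build(IDictionary<string, string> criteria)
            {
                var query = new BooleanQuery();
                foreach (var pair in criteria)
                    query.Add(new TermQuery(new Term(pair.Key, pair.Value)), Occur.MUST);
                return query;
            }
        }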

    Read the article

  • App engine datastore - query on Enum fields.

    - by Gopi
    I am using GAE (Java) with JDO for persistence. I have an entity with an Enum field which is marked @Persistent and gets saved correctly into the datastore (as observed from the Datastore viewer in the Development Console). But when I query these entities with a filter based on the Enum value, it always returns all the entities, whatever value I specify for the enum field. I know GAE Java supports enums being persisted just like basic datatypes, but does it also allow retrieving/querying based on them? A Google search could not point me to any such example code. Details: I have printed the Query just before it is executed, so in the two cases the query looks like:

        SELECT FROM com.xxx.yyy.User WHERE role == super ORDER BY key desc RANGE 0,50
        SELECT FROM com.xxx.yyy.User WHERE role == admin ORDER BY key desc RANGE 0,50

    Both queries return all the User entities from the datastore, in spite of the datastore viewer showing that some Users are of type 'admin' and some are of type 'super'.
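
    For illustration, one thing worth trying (a sketch, not a confirmed fix; User and its Role enum are the question's types): bind the enum value as a declared query parameter instead of splicing its name into the filter string.

        import java.util.List;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;

        public class UserQueries {
            @SuppressWarnings("unchecked")
            public static List<User> findByRole(PersistenceManager pm, User.Role role) {
                Query q = pm.newQuery(User.class);
                q.setFilter("role == roleParam");
                q.declareParameters(User.Role.class.getName() + " roleParam");
                q.setOrdering("key desc");
                q.setRange(0, 50);
                // the actual enum instance is passed in, not its literal name
                return (List<User>) q.execute(role);
            }
        }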

    Read the article

  • Python - urllib2 & cookielib

    - by Adrian
    I am trying to open the following website, retrieve the initial cookie, and use it for the second urlopen, BUT if you run the following code it outputs 2 different cookies. How do I use the initial cookie for the second urlopen?

        import cookielib, urllib2

        cj = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

        home = opener.open('https://www.idcourts.us/repository/start.do')
        print cj
        search = opener.open('https://www.idcourts.us/repository/partySearch.do')
        print cj

    The output shows 2 different cookies every time, as you can see:

        <cookielib.CookieJar[<Cookie JSESSIONID=0DEEE8331DE7D0DFDC22E860E065085F for www.idcourts.us/repository>]>
        <cookielib.CookieJar[<Cookie JSESSIONID=E01C2BE8323632A32DA467F8A9B22A51 for www.idcourts.us/repository>]>

    Read the article

  • T-SQL How To: Compare and List Duplicate Entries in a Table

    - by Dan7el
    SQL Server 2000. A single table has a list of users that includes a unique user ID and a non-unique user name. I want to search the table and list out any users that share the same non-unique user name. For example, my table looks like this:

        ID  User Name  Name
        ==  =========  ====
        0   parker     Peter Parker
        1   parker     Mary Jane Parker
        2   heroman    Joseph (Joey) Carter Jones
        3   thehulk    Bruce Banner

    What I want to do is do a SELECT and have the result set be:

        ID  User Name  Name
        ==  =========  ====
        0   parker     Peter Parker
        1   parker     Mary Jane Parker

    from my table. I'm not a T-SQL guru. I can do the basic joins and such, but I'm thinking there must be an elegant way of doing this. Barring elegance, there must be ANY way of doing this. I appreciate any methods that you can help me with on this topic. Thanks! ---Dan---
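
    For illustration, one common way to do this on SQL Server 2000 (the table name "Users" is an assumption; column names follow the question, with the space in [User Name] handled by a bracketed identifier): find the duplicated user names with GROUP BY ... HAVING, then join back to pull the full rows.

        SELECT t.ID, t.[User Name], t.Name
        FROM   Users t
        JOIN  (SELECT [User Name]
               FROM   Users
               GROUP BY [User Name]
               HAVING COUNT(*) > 1) dupes
          ON  dupes.[User Name] = t.[User Name]
        ORDER BY t.[User Name], t.ID;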

    Read the article

  • When will NAnt reach version 1.0

    - by sundar venugopal
    I like NAnt very much. I do a lot of scripting with NAnt; it is a great little tool. Since NAnt is pre-1.0, when problems occur I often wonder whether the problem is with NAnt itself, but this is not always the case. One funny example: after running the Oracle scripts I parsed the log output to make sure there was no problem. I was testing this with a small log file and it was fine. I used a task to load the file contents into a string property and used a regex to search for errors. When I used this script with a large log file, I stopped getting the "build failed" message at the bottom, because I was printing the error messages and the "build failed" was hiding at the top. I thought NAnt had crashed, but it worked fine. It would be better for NAnt to have a 1.0 release. Are there any reasons why not?

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows.

    What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem.

    In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example.

    We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well.

    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better?
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (We just go up to 45000 rows since we know the bigger picture now.) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint.

Count Dracula and the Horror Join.

These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey.

If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms.
If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms.
If both Table Variables use a unique constraint, then the query takes 36 Ms.
If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!!
If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 Ms.
If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.
If both tables are temporary tables and both have primary keys, it took 46 Ms.

Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis.
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well, slow queries, unless you give the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

Kevin’s Skew

In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1). To this was added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range.

Conclusions

Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
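
For illustration, a condensed sketch of the kind of test described above (table and column names are illustrative, not the article's actual test harness): two table variable heaps filled with random integers, joined with the RECOMPILE hint so the plan is compiled against the actual row counts rather than the optimizer's guess.

    DECLARE @a TABLE (n INT);
    DECLARE @b TABLE (n INT);

    -- fill each table variable with 50,000 random integers
    INSERT INTO @a SELECT TOP (50000) ABS(CHECKSUM(NEWID())) % 100000
    FROM sys.all_objects s1 CROSS JOIN sys.all_objects s2;
    INSERT INTO @b SELECT TOP (50000) ABS(CHECKSUM(NEWID())) % 100000
    FROM sys.all_objects s1 CROSS JOIN sys.all_objects s2;

    -- without the hint the optimizer guesses at the table variables' sizes;
    -- with it, the statement is recompiled using the real cardinalities
    SELECT COUNT(*)
    FROM @a AS a JOIN @b AS b ON a.n = b.n
    OPTION (RECOMPILE);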

    Read the article

  • setting library include paths in c++

    - by Drew
    Hi all, I just installed gd2 using MacPorts (sudo port install gd2), which installed libraries in the following places:

        /opt/local/include/gd.h
        /opt/local/lib/libgd.dylib (link)
        /opt/local/lib/libgd.la
        /opt/local/lib/libgd.a

    So when I create my C++ app I add '#include "gd.h"', which throws:

        main.cpp:4:16: error: gd.h: No such file or directory

    If I include gd.h by its absolute path (as above) (not a solution, but I was curious), I get:

        g++ -L/opt/local/include -L/opt/local/lib main.o Heatmap_Map.o Heatmap_Point.o -o heatmap
        Undefined symbols:
          "_gdImagePng", referenced from: _main in main.o
          "_gdImageLine", referenced from: _main in main.o
          "_gdImageColorAllocate", referenced from: _main in main.o, _main in main.o
          "_gdImageDestroy", referenced from: _main in main.o
          "_gdImageCreate", referenced from: _main in main.o
          "_gdImageJpeg", referenced from: _main in main.o
        ld: symbol(s) not found

    So, I understand this means that ld cannot find the libraries it needs (hence trying to give it hints with the "-L" values). After giving g++ the -L hints and the absolute path in the #include, I can get it to work, but I don't think I should have to do this. How can I make g++/ld search in the right places for the libraries? Drew J. Sonne.
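
    For illustration, a likely fix (paths as in the question): -I tells the preprocessor where headers live (-L only affects the linker), and -lgd tells the linker to actually pull in the gd library, which is what the missing _gdImage* symbols suggest is absent.

        # compile: -I points the preprocessor at gd.h
        g++ -I/opt/local/include -c main.cpp Heatmap_Map.cpp Heatmap_Point.cpp

        # link: -L points the linker at libgd, -lgd links against it
        g++ -L/opt/local/lib main.o Heatmap_Map.o Heatmap_Point.o -lgd -o heatmap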

    Read the article

  • route finding between two destinations on maps in android?

    - by androidbase Praveen
    I just want to find the shortest path between two locations on a map. We pass the locations' GeoPoints and then click a button to get directions, and it should show the shortest path as a blue line. How do I do this? I have searched for this, and many examples import the package com.google.googlenav.*; where do I get that from? Any ideas? Edit: I have downloaded the CloudMade API; how do I draw the lines between the points?

    Read the article

  • how to fetch app data (name, version, etc.) from the android market?

    - by liuxingruo
    As we know, apps in the Apple App Store each have a unique iTunes link, and we can fetch data about an app from the App Store through that link. I am wondering how I can achieve this for the Android Market, just like the website http://www.androlib.com/ does. As far as I know, each app in the Android Market has an ID that looks like "com.gabrouze.magic", and the QR code can be viewed at "http://chart.apis.google.com/chart?cht=qr&chs=135x135&chl=market://search?q=pname:com.gabrouze.magic". Thanks!

    Read the article

  • Qt 4.6 QLineEdit Style. How do I style the gray highlight border so it's rounded?

    - by krunk
    I'm styling a QLineEdit to have rounded borders for use as a search box. Rounding the borders themselves was easy, but I can't figure out for the life of me how to round the highlighted portion of the widget when it has focus. I've tried QLineEdit::focus, but this only modifies the interior border. The images below show how the illusion of a rounded QLineEdit is lost when it gains focus.

        QListView, QLineEdit {
            color: rgb(127, 0, 63);
            selection-color: white;
            border: 2px groove gray;
            border-radius: 10px;
            padding: 2px 4px;
        }

    Images with and without focus:

    Read the article

  • iphone: how to do Page turning

    - by srnm12
    Hi friends, I have been facing this problem for a month and haven't found anything on Google. I am using a UIView for PDF display. There is no problem with the PDF; the problem is with the transition. I have to turn each page of the PDF with a realistic page-turn experience. I have searched and dug around a lot, but I haven't found anything explaining how to do it. I don't want to use any APIs like codeflake or anything else; I want to do this with my own programming. First I read that this can be done using cocos2d or CAAnimation, but I want to know how, because a realistic page turn is, I think, a completely 3D concept. Let me know, guys, how to do this. Here's an example video: http://www.youtube.com/watch?v=oknMWvRO2XE I want an animation just like in the video.

    Read the article

  • encryption decryption function in php

    - by parthav
    Here is what I have in Python:

        import gnupg, urllib

        retk = urllib.urlopen("http://keyserver.pramberger.at/pks/"
                              "lookup?op=get&search=userid for the key is required")
        pub_key = retk.read()
        #print pub_key

        gpg = gnupg.GPG(gnupghome="/tmp/foldername", verbose=True)
        print "Import the Key :", gpg.import_keys(pub_key).summary()

        print "Encrypt the Message:"
        msg = "Hellllllllllo"
        uid = "userid that has the key on public key server"
        enc = gpg.encrypt(msg, uid, always_trust=True)
        print "*The enc content***************************== ", enc

    This function, written in Python, gives me the encrypted message. The encryption is done using the public key, which I am getting from a public key server (pramberger.at). Now, how can I implement the same functionality (getting the key from any public key server and using that key to encrypt the message) in PHP?
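
    For illustration, a sketch using the PECL gnupg extension (it assumes the extension is installed and /tmp/foldername is a writable keyring directory; the key-server query string is a placeholder, as in the Python version):

        <?php
        // use the same keyring directory as the Python version
        putenv('GNUPGHOME=/tmp/foldername');

        // fetch the public key from the key server (placeholder user id)
        $pubKey = file_get_contents(
            'http://keyserver.pramberger.at/pks/lookup?op=get&search=PLACEHOLDER_USER_ID');

        $gpg = new gnupg();
        $info = $gpg->import($pubKey);            // returns details incl. 'fingerprint'
        $gpg->addencryptkey($info['fingerprint']); // encrypt to the imported key
        $enc = $gpg->encrypt('Hellllllllllo');

        echo "The enc content == ", $enc;
        ?>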

    Read the article

  • Eclipse warning: "<methodName> has non-API return type <parameterizedType>"

    - by Tenner
    My co-worker and I have come across this warning message a couple of times recently. For the below code:

        package com.mycompany.product.data;

        import com.mycompany.product.dao.GenericDAO;

        public abstract class EntityBean {
            public abstract GenericDAO<Object, Long> getDAO();
            //              ^^^^^^ <-- WARNING OCCURS HERE
        }

    the warning appears in the listed spot as

        EntityBean.getDAO() has non-API return type GenericDAO<T, ID>

    A Google search for "has non-API return type" only shows instances where this message appears in problem lists. I.e., there's no public explanation for it. What does this mean? We can create a usage problem filter in Eclipse to make the message go away, but we don't want to do this if our usage is a legitimate problem. Thanks!

    Read the article
