Search Results

Search found 22900 results on 916 pages for 'pascal case'.

  • Issue implementing geo-tagging in Android

    - by sundar
    I am new to Android. How do I implement geo-tagging for images? I have tried it myself but am not getting the expected result. My code looks like this:

        @Override
        protected Dialog onCreateDialog(int id) {
            jpgDialog = null;
            switch (id) {
            case ID_JPGDIALOG:
                Context mContext = this;
                jpgDialog = new Dialog(mContext);
                jpgDialog.setContentView(R.layout.jpgdialog);
                exifText = (TextView) jpgDialog.findViewById(R.id.text);
                geoText = (TextView) jpgDialog.findViewById(R.id.geotext);
                bmImage = (ImageView) jpgDialog.findViewById(R.id.image);
                bmOptions = new BitmapFactory.Options();
                bmOptions.inSampleSize = 2;
                Button okDialogButton = (Button) jpgDialog.findViewById(R.id.okdialogbutton);
                okDialogButton.setOnClickListener(okDialogButtonOnClickListener);
                mapviewButton = (Button) jpgDialog.findViewById(R.id.mapviewbutton);
                mapviewButton.setOnClickListener(mapviewButtonOnClickListener);
                break;
            default:
                break;
            }
            return jpgDialog;
        }

    Please help me with how to proceed.

  • atol(), atof(), atoi() function behaviours: is there a stable way to convert between string and integer?

    - by Berkay
    These days I'm playing with the C functions atol(), atof() and atoi(). From a blog post I found a tutorial and applied it. Here are my results:

        void main() {
            char a[10], b[10];
            puts("Enter the value of a");
            gets(a);
            puts("Enter the value of b");
            gets(b);
            printf("%s+%s=%ld and %s-%s=%ld", a, b, (atol(a) + atol(b)), a, b, (atol(a) - atol(b)));
            getch();
        }

    There is atof(), which returns the float value of the string, and atoi(), which returns the integer value. Now, to see the difference between the three, I checked this code:

        main() {
            char a[] = {"2545.965"};
            printf("atol=%ld\t atof=%f\t atoi=%d\t\n", atol(a), atof(a), atoi(a));
        }

    The output will be:

        atol=2545  atof=2545.965000  atoi=2545

    Now change the string to char a[] = {"heyyou"}; and run the program again. The following will be the output (why? is there any solution to convert pure strings to integers?):

        atol=0  atof=0  atoi=0

    So the string should contain a numeric value. Now modify the program with char a[] = {"007hey"}; and the output in this case (tested on Red Hat) will be:

        atol=7  atof=7.000000  atoi=7

    So the functions have taken only the 007, not the remaining part (why?). Now consider char a[] = {"hey007"}; and the output of the program will be:

        atol=0  atof=0.000000  atoi=0

    I just want to convert my strings to numbers and then back to the same text. I played with these functions and, as you see, I'm getting really interesting results. Why is that? Are there any other functions to convert from string to integer and vice versa?
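    A note on the behaviour above: atoi-style functions skip leading whitespace, accept an optional sign, then consume digits and stop at the first non-digit, returning 0 if no digits were consumed at all. As a rough illustration (a model of that rule, not the C library itself), the same logic in Python:

        import re

        def atoi_like(s: str) -> int:
            # Leading whitespace, optional sign, then the longest run of
            # digits; everything after the first non-digit is ignored.
            m = re.match(r'\s*[+-]?\d+', s)
            return int(m.group()) if m else 0

        print(atoi_like("2545.965"))  # 2545
        print(atoi_like("007hey"))    # 7
        print(atoi_like("hey007"))    # 0 (no leading digits, so no conversion)

    In C itself, strtol() is the usual replacement when failure must be detected, since it reports where parsing stopped via its endptr argument.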

  • Implementing list position locator in C++?

    - by jfrazier
    I am writing a basic graph API in C++ (I know libraries already exist, but I am doing it for the practice/experience). The structure is basically that of an adjacency-list representation, so there are Vertex objects and Edge objects, and the Graph class contains:

        list<Vertex *> vertexList;
        list<Edge *> edgeList;

    Each Edge object has two Vertex* members representing its endpoints, and each Vertex object has a list of Edge* members representing the edges incident to the vertex. All this is quite standard, but here is my problem. I want to be able to implement deletion of edges and vertices in constant time, so, for example, each Vertex object should have a locator member that points to the position of its Vertex* in the vertexList. The way I first implemented this was by saving a list::iterator, as follows:

        vertexList.push_back(v);
        v->locator = --vertexList.end();

    Then if I need to delete this vertex later, rather than searching the whole vertexList for its pointer, I can call:

        vertexList.erase(v->locator);

    This works fine at first, but it seems that if enough changes (deletions) are made to the list, the iterators become out of date and I get all sorts of iterator errors at runtime. This seems strange for a linked list, because it doesn't seem like you should ever need to re-allocate the remaining members of the list after deletions, but maybe the STL does this to optimize by keeping memory somewhat contiguous? In any case, I would appreciate it if anyone has any insight as to why this happens. Is there a standard way in C++ to implement a locator that will keep track of an element's position in a list without becoming obsolete? Much thanks, Jeff
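    Independent of what the STL is doing here, the locator pattern itself is easy to state. A minimal sketch in Python: a hand-rolled doubly linked list whose append returns the node itself, so every element carries its own O(1) erase handle, and nothing invalidates that handle except erasing the same node twice.

        class Node:
            __slots__ = ("value", "prev", "next")
            def __init__(self, value):
                self.value, self.prev, self.next = value, None, None

        class LinkedList:
            def __init__(self):
                self.head = self.tail = None

            def append(self, value):
                """Append value and return its node: the node *is* the locator."""
                node = Node(value)
                if self.tail is None:
                    self.head = self.tail = node
                else:
                    node.prev, self.tail.next = self.tail, node
                    self.tail = node
                return node

            def erase(self, node):
                """Unlink node in O(1); erasing the same node twice is the caller's bug."""
                if node.prev: node.prev.next = node.next
                else:         self.head = node.next
                if node.next: node.next.prev = node.prev
                else:         self.tail = node.prev
                node.prev = node.next = None

    The usual cause of "stale iterator" crashes with this design is exactly that double erase, which is worth ruling out before suspecting the container.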

  • Read IFrame content using JavaScript

    - by Rajat
    OK, this is my first time dealing seriously with iframes and I can't seem to understand a few things. First, the sample code I am testing with:

        <head>
          <script type="text/javascript">
            function init() {
              console.log("IFrame content: " +
                window.frames['i1'].document.getElementsByTagName('body')[0].innerHTML);
            }
          </script>
        </head>
        <body onload="init();">
          <iframe name="i1" src="foo.txt"/>
        </body>

    The file foo.txt looks like this:

        sample text file

    Questions:

    1) The iframe seems to be behaving as an HTML document, and the file text is actually part of the body instead. Why? Is it a rule for an iframe to be an HTML document? Is it not possible for the content of an iframe to be just plain text?

    2) The file content gets wrapped inside a pre tag for some reason. Why is this so? Is it always the case?

    3) My access method in the JavaScript is working, but is there any other alternative? [native JS solutions please] If the content is always wrapped in a pre tag then I will actually have to look inside the pre tag rather than at the innerHTML.

  • Image processing on bifurcation diagram to get small eps size

    - by yCalleecharan
    Hello, I'm producing bifurcation diagrams (which are normally used in nonlinear dynamics). These diagrams identify abrupt changes in topology due to stability changes; these abrupt changes occur as one or more parameters pass through some critical value(s). An example is here: http://en.wikipedia.org/wiki/File:LogisticMap_BifurcationDiagram.png

    In the above figure, some image processing has been done so as to make the plot more visually pleasing. A bifurcation diagram usually contains hundreds of thousands of points, and the resulting EPS file can become very big. Journal submission in LaTeX format requires that figures be submitted in the EPS format. In my case one such figure can come to about 6 MB in Matlab and even much more in Gnuplot. For the example in the above figure, 100,000 x values are calculated for each r, and one can imagine that the resulting EPS file would be huge. The site, however, explains some image processing that makes the plot more visually pleasing. Can anyone explain to me, step by step, how to go about it? I can't understand the explanation provided in the "summary" section. Will that image processing also reduce the figure size? Furthermore, any tips on reducing the file size of such a huge EPS figure? Thanks a lot...
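    If the plots can be produced with Python/Matplotlib instead, one common way to shrink such figures (an illustration of the idea, not the processing described on the Wikipedia page) is to rasterize only the point cloud while keeping axes, ticks and labels as vector elements. A sketch:

        import numpy as np
        import matplotlib.pyplot as plt

        # Logistic-map bifurcation data: for each r, iterate past the
        # transient, then record samples of the attractor.
        rs = np.linspace(2.5, 4.0, 2000)
        x = np.full_like(rs, 0.5)
        pts_r, pts_x = [], []
        for _ in range(600):                 # discard the transient
            x = rs * x * (1 - x)
        for _ in range(200):                 # keep 200 samples per r
            x = rs * x * (1 - x)
            pts_r.append(rs.copy())
            pts_x.append(x.copy())

        fig, ax = plt.subplots()
        # rasterized=True turns just this artist into a bitmap at savefig's
        # dpi, so 400k points collapse to a fixed-size image inside the EPS.
        ax.plot(np.concatenate(pts_r), np.concatenate(pts_x),
                ',k', alpha=0.25, rasterized=True)
        ax.set_xlabel('r')
        ax.set_ylabel('x')
        fig.savefig('bifurcation.eps', dpi=300)

    With this approach the EPS size is governed by the dpi setting rather than by the number of plotted points.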

  • Ternary operator

    - by Antoine Leclair
    In PHP, I often use the ternary operator to add an attribute to an HTML element if it applies to the element in question. For example:

        <select name="blah">
          <option value="1"<?= $blah == 1 ? ' selected="selected"' : '' ?>> One </option>
          <option value="2"<?= $blah == 2 ? ' selected="selected"' : '' ?>> Two </option>
        </select>

    I'm starting a project with Pylons using Mako for the templating. How can I achieve something similar? Right now, I see two possibilities, neither of which is ideal.

    Solution 1:

        <select name="blah">
        % if blah == 1:
          <option value="1" selected="selected">One</option>
        % else:
          <option value="1">One</option>
        % endif
        % if blah == 2:
          <option value="2" selected="selected">Two</option>
        % else:
          <option value="2">Two</option>
        % endif
        </select>

    Solution 2:

        <select name="blah">
          <option value="1"
        % if blah == 1:
            selected="selected"
        % endif
          >One</option>
          <option value="2"
        % if blah == 2:
            selected="selected"
        % endif
          >Two</option>
        </select>

    In this particular case the value is equal to the variable tested (value="1" and blah == 1), but I use the same pattern in other situations, like <?= isset($variable) ? ' value="$variable"' : '' ?>. I am looking for a clean way to achieve this using Mako.
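    Since Mako's ${...} substitutions accept arbitrary Python expressions, Python's conditional expression gives a one-liner much like PHP's ternary. A sketch, assuming the template variable blah from above:

        <select name="blah">
          <option value="1"${' selected="selected"' if blah == 1 else ''}>One</option>
          <option value="2"${' selected="selected"' if blah == 2 else ''}>Two</option>
        </select>

    The same shape covers the other pattern mentioned, e.g. ${' value="%s"' % variable if variable else ''}.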

  • Android, NetworkInfo.getTypeName(), NullpointerException

    - by moppel
    I have an activity which shows some list entries. When I click on a list item, my app checks which connection type is available ("WIFI" or "MOBILE") through NetworkInfo.getTypeName(). As soon as I call this method I get a NullPointerException. Why? I tested this on the emulator, because my phone is currently not available (it's broken...). I assume this is the problem? That is the only explanation I have; if that's not the case, I have no idea why this would be null. Here's a code snippet:

        public class VideoList extends ListActivity {
            ...
            public void onCreate(Bundle bundle) {
                final ConnectivityManager cm =
                    (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
                ...
                listview.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view,
                                            int position, long id) {
                        ...
                        NetworkInfo ni = cm.getActiveNetworkInfo();
                        String connex = ni.getTypeName(); // NullPointerException here
                        if (connex.equals("WIFI")) doSomething();
                    }
                });
            }
        }

  • BASH: How to count all the human readable files?

    - by user1687406
    I'm taking an intro course to UNIX and have a homework question that follows:

        How many files in the previous question are text files? A text file is
        any file containing human-readable content. (TRICK QUESTION. Run the
        file command on a file to see whether the file is a text file or a
        binary data file! If you simply count the number of files with the
        ".txt" extension you will get no points for this question.)

    The previous question simply asked how many regular files there were, which was easy to figure out by doing:

        find . -type f | wc -l

    I'm just having trouble determining what "human-readable content" is, since I'm assuming it means anything besides binary/assembly, but I thought that's what -type f displays. Maybe that's what the professor meant by saying "trick question"? This question has a follow-up later that also asks "What text files contain the string 'csc' in any mix of upper and lower case?". Obviously "text" refers to more than just .txt files, but I need to figure out the first question to determine this!
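    The file(1) hint can be scripted directly. A sketch in Python (assuming the file utility is on the PATH; a shell solution would similarly pipe file's output through a count of lines containing "text"):

        import subprocess
        from pathlib import Path

        count = 0
        for path in Path('.').rglob('*'):
            if not path.is_file():
                continue
            # file -b prints just the description, e.g. "ASCII text"
            # or "ELF 64-bit LSB executable".
            desc = subprocess.run(['file', '-b', str(path)],
                                  capture_output=True, text=True).stdout
            if 'text' in desc.lower():
                count += 1
        print(count)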

  • WCF service reference namespace differs from original

    - by Thorarin
    I'm having a problem regarding the namespaces used by my service references. I have a number of WCF services, say with the namespace MyCompany.Services.MyProduct (the actual namespaces are longer). As part of the product, I'm also providing a sample C# .NET website. This web application uses the namespace MyCompany.MyProduct. During initial development, the service was added as a project reference to the website and used directly. I used a factory pattern that returns an object instance that implements MyCompany.Services.MyProduct.IMyService. So far, so good.

    Now I want to change this to use an actual service reference. After adding the reference and typing MyCompany.Services.MyProduct in the namespace textbox, it generates classes in the namespace MyCompany.MyProduct.MyCompany.Services.MyProduct. BAD! I don't want to have to change using directives in several places just because I'm using a proxy class. So I tried prepending the namespace with global::, but that is not accepted. Note that I hadn't even deleted the original assembly references yet, and "reuse types" is enabled, but no reusing was done, apparently. However, I don't want to keep the assembly references around in my sample website just for it to work anyway.

    The only solution I've come up with so far is setting the default namespace for my web application to MyCompany (because it cannot be empty) and adding the service reference as Services.MyProduct. Suppose that a customer wants to use my sample website as a starting point and changes the default namespace to OtherCompany.Whatever; this will obviously break my workaround. Is there a good solution to this problem? To summarize: I want to generate a service reference proxy in the original namespace, without referencing the assembly.

    Note: I have seen this question before, but there was no solution provided that is acceptable for my use case.

    Edit: As John Saunders suggested, I've submitted some feedback to Microsoft about this: Feedback item @ Microsoft Connect

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc.

    Let's say that one user runs a report (5/1/2010 to now) over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Question: does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc ("5/5/2010" in this case). Please speak to this as well.

    Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.
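    On the "stable end date" idea: the general trick is to truncate the moving timestamp to the coarsest boundary the report can tolerate, so repeated runs pass byte-identical parameters. The same computation sketched in Python (the app in question is C#; this is just the shape of the fix):

        from datetime import datetime, timedelta

        def report_end_date(now: datetime) -> datetime:
            # Midnight after 'now': includes all of today's data, yet stays
            # constant for every run made on the same calendar day.
            return datetime.combine(now.date() + timedelta(days=1),
                                    datetime.min.time())

        print(report_end_date(datetime(2010, 5, 4, 15, 41, 27)))
        # 2010-05-05 00:00:00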

  • How to generate lots of redundant ajax elements like checkboxes and pulldowns in Django?

    - by iJames
    Hello folks. I've been getting lots of answers from Stack Overflow just by searching now that I'm in Django. Now I hope my question will also create some value for everybody.

    In choosing Django, I was hoping there was some mechanism similar to the way you can do partials in RoR. This was going to help me in two ways: one was in generating repeating indexed forms or form elements, and the other in rendering only a piece of the page on the round trip. I've done a little bit of that by using Taconite with a simple URL click, but now I'm trying to get more advanced. This question focuses on the form issue, which boils down to how to iterate over a secondary object.

    Say I have a list of photo instances, each of which has a couple of parameters, let's say a size and a quantity. I want to generate form elements for each photo instance separately. But then I have two lists I want to iterate over at the same time. Context:

        photos = Photo.objects.all()
        forms = {}
        for photo in photos:
            forms[photo.id] = PhotoForm()

    In other words, we've got a list of photo objects and a dict of forms keyed by photo.id. Here's an abstraction of the template:

        {% for photo in photos %}
          {% include "photoview.html" %}
          {% comment %}
            So here I want to use the photo.id as an index to get the correct
            form, so that each photo has its own form. I would want to have a
            different action, and each form field would be unique. Is that
            possible? How can I iterate on that? Thanks!
          {% endcomment %}
          Quantity: {{ oi.quantity }} {{ form.quantity }}
          Dimensions: {{ oi.size }} {{ form.size }}
        {% endfor %}

    What can I do about this simple case? And how can I make it so that every control automatically updates the server instead of using a form at all? Thanks! James
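    One common shape for this, sketched below (not the poster's code; Photo and PhotoForm are the question's own names): give each form a unique prefix derived from photo.id so its field names can't collide, and hand the template (photo, form) pairs so a single loop walks both lists together.

        # views.py (sketch)
        from django.shortcuts import render

        def photo_list(request):
            photos = Photo.objects.all()
            # prefix namespaces every field (e.g. "photo-7-quantity"), so many
            # PhotoForms can coexist on one page and in one POST body.
            pairs = [(photo, PhotoForm(prefix='photo-%s' % photo.id))
                     for photo in photos]
            return render(request, 'photos.html', {'photo_forms': pairs})

    And in the template, Django's for tag can unpack the pairs directly:

        {% for photo, form in photo_forms %}
          {% include "photoview.html" %}
          Quantity: {{ form.quantity }}
          Dimensions: {{ form.size }}
        {% endfor %}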

  • How to parse XML with special characters?

    - by Snooze
    Whenever I try to parse XML with special characters such as o or ???? I get an error. The XML document claims to use UTF-8 encoding, but that does not seem to be the case. Here is what the troublesome text looks like when I view the XML in Firefox:

        Bleach: The Diamond Dust Rebellion - MÅ? Hitotsu no HyÅ?rinmaru; Bleach - The DiamondDust Rebellion - Mou Hitotsu no Hyourinmaru

    On the actual website, Å? is actually the character o.

        <br /> One day, Doraemon and his friends meet Professor Mangetsu (æº?æ??å??ç??, Professor Mangetsu?), who studies magic and magical beings such as goblins, and his daughter Miyoko (ç¾?å¤?å­?, Miyoko?), and are warned of the dangerous approximation of the &quot;star of the Underworld&quot; to the Earth&#039;s orbit.<br /> <br />

    And once again, on the actual website, those characters appear as ???? and ???. The actual XML file is formatted properly other than those special characters, which certainly do not appear to be using the UTF-8 encoding. Is there a way to get NSXML to parse these XML files?
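    For what it's worth, that Å? pattern is the classic signature of UTF-8 bytes being decoded with a one-byte encoding such as Latin-1. A quick Python illustration, hypothesizing that the intended character was ō (U+014D):

        s = 'ō'                      # U+014D LATIN SMALL LETTER O WITH MACRON
        raw = s.encode('utf-8')      # b'\xc5\x8d', two bytes
        # Misreading those bytes as Latin-1 yields 'Å' followed by an
        # unprintable control character, which renders as the 'Å?' seen above.
        print(raw.decode('latin-1')) # 'Å\x8d'
        # Decoding with the encoding the bytes are actually in recovers the text:
        print(raw.decode('utf-8'))   # 'ō'

    In other words, the bytes may well be valid UTF-8 and the garbling introduced by whichever layer decoded them with the wrong charset before the parser saw them.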

  • What would be different in Java if Enum declaration didn't have the recursive part

    - by atamur
    Please see http://stackoverflow.com/questions/211143/java-enum-definition and http://stackoverflow.com/questions/3061759/why-in-java-enum-is-declared-as-enume-extends-enume for general discussion. Here I would like to learn what exactly would be broken (not typesafe anymore, or requiring additional casts, etc.) if the Enum class was defined as:

        public class Enum<E extends Enum>

    I'm using this code for testing my ideas:

        interface MyComparable<T> {
            int myCompare(T o);
        }

        class MyEnum<E extends MyEnum> implements MyComparable<E> {
            public int myCompare(E o) { return -1; }
        }

        class FirstEnum extends MyEnum<FirstEnum> {}
        class SecondEnum extends MyEnum<SecondEnum> {}

    With it I wasn't able to find any benefits in this exact case.

    PS: the fact that I'm not allowed to do

        class ThirdEnum extends MyEnum<SecondEnum> {}

    when MyEnum is defined with recursion is a) not relevant, because with real enums you are not allowed to do that just because you can't extend an enum yourself, and b) not true: please try it in a compiler and see that it is in fact able to compile without any errors.

    PPS: I'm more and more inclined to believe that the correct answer here would be "nothing would change if you remove the recursive part", but I just can't believe that.

  • JUnit test failing - complaining of missing data that was just inserted

    - by Collin Peters
    I have an extremely odd problem in my JUnit tests that I just can't seem to nail down. I have a multi-module Java webapp project with a fairly standard structure (DAOs, service classes, etc...). Within this project I have a 'core' project which contains some abstracted setup code that inserts a test user along with the necessary items for a user (in this case an 'enterprise'; a user must belong to an enterprise, and this is enforced at the database level).

    Fairly simple so far... but here is where the strangeness begins. Some tests fail with a database exception complaining that a user cannot be inserted because an enterprise does not exist. But the enterprise was just created in the preceding line of code! And there were no errors in the insertion of the enterprise. Stranger yet, if such a test class is run by itself, everything works fine. It is only when the test is run as part of the project that it fails! And the exact same abstracted code was run by 10+ tests before the one that fails!

    I have been banging my head against a wall with this for days and haven't really made any progress. I'm not even sure what information to offer up to help diagnose this.

    - Using JUnit 4.4, Spring 2.5.6, iBatis 2.3.0, PostgreSQL 8.3
    - Switching to org.springframework.jdbc.datasource.DriverManagerDataSource from org.apache.commons.dbcp.BasicDataSource changed the problem. Using DriverManagerDataSource the tests work for the first time, but now all of a sudden a lot of data isn't rolled back in the database! It leaves everything behind. All with no errors.
    - Tests fail when run via Eclipse & Maven

    Please ask for any info which may help me solve my problem!

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand the limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main() {
            const uint64_t umin = 1;
            const uint64_t umax = 10000000000LL;
            double sum = 0.;
            // Each iteration is independent; the reduction combines the
            // per-thread partial sums at the end.
            #pragma omp parallel for reduction(+:sum)
            for (uint64_t u = umin; u < umax; u++)
                sum += 1./u/u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9 s for the code to run with 48 threads, 3.1 s with 36 threads, 3.7 s with 24 threads, 4.9 s with 12 threads, and 57 s with 1 thread.

    Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19-20x speedup and the ideal 48x speedup.

    To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9 s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert them to a BufferedImage. In abbreviated code, for the client:

        public String writeAndReadSocket(String request) {
            // Write text to the socket
            BufferedWriter bufferedWriter =
                new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
            bufferedWriter.write(request);
            bufferedWriter.flush();

            // Read text from the socket
            BufferedReader bufferedReader =
                new BufferedReader(new InputStreamReader(socket.getInputStream()));

            // Read the prefixed size
            int size = Integer.parseInt(bufferedReader.readLine());

            // Get that many bytes from the stream
            char[] buf = new char[size];
            bufferedReader.read(buf, 0, size);
            return new String(buf);
        }

        public BufferedImage stringToBufferedImage(String imageBytes) {
            return ImageIO.read(new ByteArrayInputStream(s.getBytes()));
        }

    and the server:

        # Twisted server code here
        # The analog of the following method is called with the proper client
        # request and the result is written to the socket.
        def worker_thread():
            img = draw_function()
            buf = StringIO.StringIO()
            img.save(buf, format="PNG")
            img_string = buf.getvalue()
            return "%i\r%s" % (sys.getsizeof(img_string), img_string)

    This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case.

    Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the byte stream directly produces the same errors. I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
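    Two details in this kind of framing are easy to get wrong, and a byte-exact sketch on the Python side may help (an illustration, not the poster's Twisted code). First, the length prefix has to be len(payload): sys.getsizeof() reports the size of the Python object itself, header included, so the advertised length never matches the bytes sent. Second, the receiver has to count raw bytes, whereas Java's Reader classes count decoded characters.

        def frame(payload: bytes) -> bytes:
            # Prefix with the byte count, not sys.getsizeof(payload):
            # getsizeof includes the interpreter's object overhead.
            return b"%d\r" % len(payload) + payload

        def read_frame(sock) -> bytes:
            # Read the ASCII length up to the CR, then exactly that many bytes.
            header = b""
            while not header.endswith(b"\r"):
                header += sock.recv(1)
            remaining = int(header[:-1])
            chunks = []
            while remaining:
                chunk = sock.recv(remaining)
                if not chunk:
                    raise ConnectionError("socket closed mid-frame")
                chunks.append(chunk)
                remaining -= len(chunk)
            return b"".join(chunks)

    The Java client needs the equivalent of read_frame on the raw InputStream (looping until size bytes arrive), since a single read() call may return fewer bytes than requested.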

  • Pecking order of pigeons?

    - by sc_ray
    I was going through problems on graph theory posted by Prof. Ericksson from my alma mater and came across this rather unique question about pigeons and their innate tendency to form pecking orders. The question goes as follows:

        Whenever groups of pigeons gather, they instinctively establish a pecking
        order. For any pair of pigeons, one pigeon always pecks the other, driving
        it away from food or potential mates. The same pair of pigeons always
        chooses the same pecking order, even after years of separation, no matter
        what other pigeons are around. Surprisingly, the overall pecking order can
        contain cycles; for example, pigeon A pecks pigeon B, which pecks pigeon C,
        which pecks pigeon A. Prove that any finite set of pigeons can be arranged
        in a row from left to right so that every pigeon pecks the pigeon
        immediately to its left.

    Since this is a question on graph theory, the first thing that crossed my mind was: is this just asking for a topological sort of a graph of relationships (the relationships being the pecking order)? What made it a little more complex was the fact that there can be cyclic relationships between the pigeons. Suppose we have a cyclic dependency as follows:

        A-B-C-A, where A pecks B, B pecks C, and C goes back and pecks A

    If we represent it in the way suggested by the problem, we have something like:

        C B A

    But the above row ordering does not factor in the pecking order between C and A. I had another idea of solving it by mathematical induction, where the base case is two pigeons arranged according to their pecking order, assuming the pecking-order arrangement is valid for n pigeons and then proving it to be true for n+1 pigeons. I am not sure if I am going down the wrong track here. Some insights into how I should be analyzing this problem would be helpful. Thanks
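    The induction idea in the last paragraph does go through, and it can be made concrete as an algorithm. A sketch (pecks is a hypothetical predicate, defined for every pair with exactly one direction true): insert each new pigeon immediately before the first pigeon in the row that pecks it; if nobody pecks it, it goes on the right end.

        def arrange(pigeons, pecks):
            """Build a row in which every pigeon pecks its immediate left neighbour."""
            row = []
            for p in pigeons:
                for i, q in enumerate(row):
                    if pecks(q, p):
                        # Everyone before position i lost to p (so p pecks
                        # row[i-1]), and q pecks p: the invariant holds on
                        # both sides of the insertion point.
                        row.insert(i, p)
                        break
                else:
                    row.append(p)  # p pecks the whole row, so it goes rightmost
            return row

        # Example with the cycle from the question: A pecks B, B pecks C, C pecks A.
        order = {('A', 'B'), ('B', 'C'), ('C', 'A')}
        print(arrange('ABC', lambda a, b: (a, b) in order))  # ['C', 'B', 'A']

    Note that this only orders adjacent pairs; as the C/A pair shows, non-adjacent pigeons may still peck "the wrong way", which is exactly why cycles are not an obstacle.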

  • Ruby on Rails Associations

    - by Eef
    Hey all, I am starting to create my sites in Ruby on Rails these days instead of PHP. I have picked up the language easily but am still not 100% confident with associations :) I have this situation:

        # User model
        has_and_belongs_to_many :roles

        # Role model
        has_and_belongs_to_many :users

        # Journal model
        has_and_belongs_to_many :roles

    So I have a roles_users table and a journals_roles table. I can access the user's roles like so:

        user = User.find(1)
        user.roles

    This gives me the roles assigned to the user. I can then access the Journal model like so:

        journals = user.roles.first.journals

    This gets me the journals associated with the user based on the roles. I want to be able to access the journals like:

        user.journals

    In my User model I have tried this:

        def journals
          self.roles.collect { |role| role.journals }.flatten
        end

    This gets me the journals in a flattened array, but unfortunately I am then unable to access anything associated with the journals. For example, the Journal model has:

        has_many :items

    When I try to access user.journals.items it does not work, as it is a flattened array on which I am trying to use the has_many association. Is it possible to get user.journals another way, other than the way I have shown above with the collect method? Hope you guys understand what I mean; if not, let me know and I'll try to explain it better. Cheers, Eef

  • Should a Trim method generally go in the Data Access Layer or within the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database that may have been entered by a user at some point. In my opinion it is better to do such formatting before data is persisted, so that it is done only once and the data can then be in a consistent and reliable state. Unfortunately this is not the case here, which leads me to the next best solution: using a Trim method.

    If I trim all data as part of my data access layer, then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net result; however, the trim will be operating on all values assigned to my business objects' properties, not just the ones that come from the inconsistent database.

    I guess as a somewhat philosophical question that may determine the answer, I could ask: "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have a set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required, or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer more strict in the type of data that it will accept? I think this may be the more fundamental question.

    Update: below are a few links that I thought I should share on the topic of data cleansing.

    - Information service patterns, Part 3: Data cleansing pattern
    - LINQ to SQL - Format a string before saving?
    - How to trim values using Linq to Sql?
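    To make the "normalize in the set accessor" option concrete, here is the shape of it as a sketch (in Python for brevity; the project in question is C#, and Contact and the bare strip are stand-ins for a real format rule):

        class Contact:
            def __init__(self, phone_number: str):
                self.phone_number = phone_number  # routed through the setter

            @property
            def phone_number(self) -> str:
                return self._phone_number

            @phone_number.setter
            def phone_number(self, value: str) -> None:
                # Coerce at the domain boundary: every assignment is trimmed,
                # whether the value came from the database, the UI, or a test.
                self._phone_number = value.strip()

    The design trade-off is then explicit: the domain object guarantees its own invariant (it never holds untrimmed data), at the cost of silently altering any value assigned to it.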

  • Dynamic loading of shared objects using dlopen()

    - by Andy
    Hi, I'm working on a plain X11 app. By default, my app only requires libX11.so and the standard gcc C and math libs. My app also has support for extensions like Xfixes and Xrender and the ALSA sound system, but this feature shall be made optional: if Xfixes/Xrender/ALSA is installed on the host system, my app will offer extended functionality. If Xfixes or Xrender or ALSA is not there, my app will still run, but some functionality will not be available.

    To achieve this behaviour, I'm not linking dynamically against -lXfixes, -lXrender and -lasound. Instead, I'm opening these libraries manually using dlopen(). By doing it this way, I can be sure that my app won't fail in case one of these optional components is not present.

    Now to my question: what library names should I use when calling dlopen()? I've seen that these differ from distro to distro. For example, on openSUSE 11 they're named the following:

        libXfixes.so
        libXrender.so
        libasound.so

    On Ubuntu, however, the names have a version number attached, like this:

        libXfixes.so.3
        libXrender.so.1
        libasound.so.2

    So trying to open "libXfixes.so" would fail on Ubuntu, although the lib is obviously there; it just has a version number attached. So how should my app handle this? Should I let my app scan /usr/lib/ manually first to see which libs are there and then choose an appropriate one? Or does anyone have a better idea? Thanks guys, Andy
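    As an aside on how other runtimes handle this: Python's ctypes.util.find_library resolves a bare name like "asound" to the versioned soname by consulting the linker's configuration, which suggests the usual options for a C app too, namely ask the linker cache rather than scanning /usr/lib, or simply try a short list of known sonames (libasound.so.2, then libasound.so) in order. A quick sketch of the Python side:

        import ctypes
        import ctypes.util

        # find_library('asound') returns something like 'libasound.so.2'
        # (or None when ALSA isn't installed), so the versioned soname
        # never has to be hard-coded per distro.
        name = ctypes.util.find_library('asound')
        alsa = ctypes.CDLL(name) if name else None
        if alsa is None:
            print('ALSA not present; running with sound disabled')

    Loading the versioned soname directly (libasound.so.2) is also defensible on its own: the major version encodes the ABI, and the unversioned .so symlink is typically only installed with the -dev package anyway.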

  • Getting basepath from view in zend framework

    - by mobmad
    Case: you're developing a site with Zend Framework and need relative links to the folder the webapp is deployed in, i.e. mysite.com/folder online and localhost:8080 during development. The following works nicely in controllers regardless of the deployed location:

        $this->_helper->redirector->gotoSimple($action, $controller, $module, $params);

    And the following inside a view script, e.g. index.phtml:

        <a href="<?php echo $this->url(array('controller' => 'index', 'action' => 'index'), null, true); ?>">

    But how do I get the correct base path when linking to images or stylesheets? (In a layout.phtml file, for example):

        <img src='<?php echo WHAT_TO_TYPE_HERE; ?>images/logo.png' />

    and

        $this->headLink()->appendStylesheet(WHAT_TO_TYPE_HERE . 'css/default.css');

    WHAT_TO_TYPE_HERE should be replaced with something that gives <img src="/folder/images/logo.png" /> on mysite.com and <img src="/images/logo.png" /> on localhost.

  • Background color remaining when switching views

    - by Guy
    Hi guys. I have a UIViewController named equationVC whose user interface is being programmatically created from another NSObject class called equationCon. Upon loading equationVC, a method called chooseInterface is called on the equationCon class. I have a global variable (globalVar) that points to a user-defined string. chooseInterface finds a method in the equationCon class that matches the string globalVar points to. In this case, let's say that globalVar points to a string called "methodThatMatches."

    In methodThatMatches, another view controller needs to show the results of what methodThatMatches did, so methodThatMatches creates a new equationVC that calls upon methodThatMatches2. As a test, each method changes the color of the background. When the application starts up, I get a purple background, but as soon as I navigate backwards I get another purple screen, which should be yellow. I do not think that I am releasing the view properly. Can anyone help?

        -(void)chooseInterface {
            NSString* equationTemp = [globalVar stringByReplacingOccurrencesOfString:@" " withString:@""];
            equationTemp = [equationTemp stringByReplacingOccurrencesOfString:@"'" withString:@""];
            SEL equationName = NSSelectorFromString(equationTemp);
            NSLog(@"selector! %@", NSStringFromSelector(equationName));
            if ([self respondsToSelector:equationName]) {
                [self performSelector:equationName];
            }
        }

        -(void)methodThatMatches {
            self.equationVC.view.backgroundColor = [UIColor yellowColor];
            [setGlobalVar:@"methodThatMatches2"];
            EquationVC* temp = [[EquationVC alloc] init];
            [[self.equationVC navigationController] pushViewController:temp animated:YES];
            [temp release];
        }

        -(void)methodThatmatches2 {
            self.equationVC.view.backgroundColor = [UIColor purpleColor];
        }

  • getting a "default" concrete class that implements an interface

    - by Roger Joys
    I am implementing a custom (and generic) Json.NET serializer and hit a bump in the road that I could use some help on. When the deserializer is mapping to a property whose type is an interface, how can I best determine what sort of object to construct to deserialize into the interface property? I have the following:

        [JsonConverter(typeof(MyCustomSerializer<foo>))]
        class foo {
            int Int1 { get; set; }
            IList<string> StringList { get; set; }
        }

    My serializer properly serializes this object, but when it comes back in and I try to map the JSON parts to the object, I have a JArray and an interface. I am currently instantiating anything enumerable, like a List, as:

        theList = Activator.CreateInstance(property.PropertyType);

    This works great within the deserialization process, but when the property is an IList, I get runtime complaints (obviously) about not being able to instantiate an interface. So how would I know what type of concrete class to create in a case like this? Thank you

  • UIButton receives touches at the wrong position after CABasicAnimation applied

    - by Dmitry
    I need to add an animation to a UIButton: a simple moving animation. When I do this, the button moves, but the click area of the button remains at the old position:

        UIButton *slideButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        [slideButton addTarget:self action:@selector(clicked:) forControlEvents:UIControlEventTouchUpInside];
        [slideButton setFrame:CGRectMake(50, 50, 50, 50)];
        [self.view addSubview:slideButton];

        CABasicAnimation *slideAnimation = [CABasicAnimation animationWithKeyPath:@"position"];
        [slideAnimation setRemovedOnCompletion:NO];
        [slideAnimation setDuration:1];
        [slideAnimation setFillMode:kCAFillModeBoth];
        [slideAnimation setAutoreverses:NO];
        slideAnimation.fromValue = [NSValue valueWithCGPoint:CGPointMake(50, 50)];
        slideAnimation.toValue = [NSValue valueWithCGPoint:CGPointMake(100, 100)];
        [slideButton.layer addAnimation:slideAnimation forKey:@"slide"];

    I have already seen an answer suggesting changing the frame of the button at the end of the animation (as in "UIButton receives touchEvents at the wrong position after a CABasicAnimaton"), but that's not my case, because I need to give the user the possibility to click on the button while it's moving... Also, I can't use other approaches (like NSTimer) as a replacement for CABasicAnimation, because I will have a complex path to move the object along... Thanks for your help!

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message-framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request.

    My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client.

    Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all-at-once
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream.
            // The BinaryWriter is a little redundant in this
            // simplified example, but here because
            // the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
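    For reference, the usual way to guarantee message-level atomicity regardless of what the stream itself promises is to serialize whole sends behind one lock per connection. A minimal sketch (in Python here; the library in question is C#, where a lock statement around the Write/Flush pair plays the same role):

        import threading

        class FramedSender:
            """Serializes whole messages so a heartbeat thread can never
            interleave its bytes into the middle of a large response."""

            def __init__(self, sock):
                self._sock = sock
                self._lock = threading.Lock()

            def send(self, payload: bytes) -> None:
                with self._lock:  # one complete message at a time per connection
                    self._sock.sendall(payload)

    The heartbeat thread then calls the same send() and simply blocks for the duration of any in-flight message.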
