Search Results

Search found 37931 results on 1518 pages for 'computer case'.

Page 440 of 1518

  • How can a link within a WebView load another layout using javascript?

    - by huffmaster
    So I have 2 layout files (main.xml, featured.xml), and each has a single WebView. When the application starts, "main.xml" loads an HTML file into its WebView. In this HTML file I have a link that calls JavaScript, which runs code in the Activity that loaded the HTML. Once back in this Activity code, though, I try running setContentView(R.layout.featured) but it just bombs out on me. If I debug, it just dies without any real error, and if I run it, the application just force closes. Am I going about this correctly, or should I be doing something differently?

        final private int MAIN = 1;
        final private int FEATURED = 2;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            webview = (WebView) findViewById(R.id.wvMain);
            webview.getSettings().setJavaScriptEnabled(true);
            webview.getSettings().setSupportZoom(false);
            webview.addJavascriptInterface(new EHJavaScriptInterface(), "eh");
            webview.loadUrl("file:///android_asset/default.html");
        }

        final class EHJavaScriptInterface {
            EHJavaScriptInterface() { }

            public void loadLayout(final String lo) {
                int i = Integer.parseInt(lo.trim());
                switch (i) {
                    /****** THIS IS WHERE I'M BOMBING OUT *********/
                    case FEATURED: setContentView(R.layout.featured); break;
                    case MAIN: setContentView(R.layout.main); break;
                }
            }
        }
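    One thing worth checking (not stated in the original post): methods exposed through addJavascriptInterface are invoked on a WebView worker thread, not on the UI thread, and calling setContentView off the UI thread is a common cause of a silent force close. A minimal sketch of posting the layout switch back to the UI thread, reusing the poster's names inside the same Activity:

        // Sketch only: this inner class lives inside the poster's Activity, next to MAIN/FEATURED.
        final class EHJavaScriptInterface {
            public void loadLayout(final String lo) {
                final int i = Integer.parseInt(lo.trim());
                // addJavascriptInterface callbacks arrive on a background thread,
                // so hand UI work (setContentView) back to the main thread.
                runOnUiThread(new Runnable() {
                    public void run() {
                        switch (i) {
                            case FEATURED: setContentView(R.layout.featured); break;
                            case MAIN:     setContentView(R.layout.main);     break;
                        }
                    }
                });
            }
        }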

    Read the article

  • Constructor Overloading

    - by Mark Baker
    Normally when I want to create a class constructor that accepts different types of parameters, I'll use a kludgy overloading principle of not defining any args in the constructor definition. E.g. for an ECEF coordinate class constructor, I want it to accept either $x, $y and $z arguments, or a single array argument containing x, y and z values, or a single LatLong object, so I'd create a constructor looking something like:

        function __construct()
        {
            // Identify if any arguments have been passed to the constructor
            if (func_num_args() > 0) {
                $args = func_get_args();
                // Identify the overload constructor required, based on the datatype of the first argument
                $argType = gettype($args[0]);
                switch($argType) {
                    case 'array' :
                        // Array of Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromArray';
                        break;
                    case 'object' :
                        // A LatLong object that needs converting to Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromLatLong';
                        break;
                    default :
                        // Individual Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromXYZ';
                        break;
                }
                // Call the appropriate overload constructor
                call_user_func_array(array($this,$overloadConstructor),$args);
            }
        }   // function __construct()

    I'm looking at an alternative: provide a straight constructor with $x, $y and $z as defined arguments, and provide static methods createECEFfromArray() and createECEFfromLatLong() that handle all the necessary extraction of x, y and z, then create a new ECEF object using the standard constructor and return it. Which option is cleaner from an OO purist's perspective?
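    For illustration only (not part of the original post), here is the static-factory alternative the poster describes, sketched in Java rather than PHP; the LatLong type and its toCartesian() helper are hypothetical stand-ins for the poster's conversion logic:

        // Minimal sketch of "named constructors" delegating to one real constructor.
        interface LatLong {
            double[] toCartesian(); // hypothetical conversion to {x, y, z}
        }

        public class ECEF {
            private final double x, y, z;

            // The one straight constructor with explicit arguments
            public ECEF(double x, double y, double z) {
                this.x = x;
                this.y = y;
                this.z = z;
            }

            // Named factories extract x, y, z and delegate to the constructor
            public static ECEF fromArray(double[] xyz) {
                return new ECEF(xyz[0], xyz[1], xyz[2]);
            }

            public static ECEF fromLatLong(LatLong ll) {
                double[] xyz = ll.toCartesian();
                return new ECEF(xyz[0], xyz[1], xyz[2]);
            }
        }

    With factories the call site states its intent (ECEF.fromLatLong(position)) instead of relying on runtime type sniffing, which is a large part of the usual argument for the second option.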

    Read the article

  • Using Reachability for Internet *or* local WiFi?

    - by randallmeadows
    I've searched SO for the answer to this question, and it's not really addressed, at least not to a point where I can make it work. I was originally only checking for Internet reachability, using:

        self.wwanReach = [Reachability reachabilityWithHostName:@"www.apple.com"];
        [wwanReach startNotifer];

    I now need to support a local WiFi connection (in the absence of reaching the Internet in general), and when I found +reachabilityForLocalWiFi, I also noticed there was +reachabilityForInternetConnection. I figured I could use these instead of hard-coding "www.apple.com" in there, but alas, when I use

        self.wwanReach = [Reachability reachabilityForInternetConnection];
        [wwanReach startNotifer];
        self.wifiReach = [Reachability reachabilityForLocalWiFi];
        [wifiReach startNotifer];

    the reachability callback that I've set up "never" gets called, for values of "never" up to 10, 12, 15 minutes or so, which was as long as my patience lasted. (Users' patience will be much less, I'm sure.) Switching back to +reachabilityWithHostName: works within seconds. I also tried each pair individually, in case there was an issue with two notifiers in progress simultaneously, but that made no difference. So: what is the appropriate way to determine reachability to either the Internet/WWAN or a local WiFi network (either one, or both)? [This particular use case is an iPhone or iPad connecting to a Mac mini on a computer-to-computer network; I'm sure other situations apply.]

    Read the article

  • Determine the folder of a SAS source file

    - by exhuma
    When I open a SAS file in Enterprise Guide and run it, it is executed on the server. The source file itself is located either on the production site or the development site; in both cases, however, it is executed on the same server. I want to be able to tell my script to store results in a relative folder. But if I write something like

        libname lib_out xport "..\tmp\foobar.xpt";

    I get an error, because the working folder of the SAS Enterprise Guide process is not the location of my source file, but a folder on the server. And the folder ..\tmp does not exist there. Even if it did, the server process does not have write permission in that folder. I would like to determine from which folder the .sas file was loaded and set the working folder accordingly. In one case it's S:\Development\myproject\sas\foobar.sas and in the other case it's S:\Production\myproject\sas\foobar.sas. Is this possible at all? Or how would you do this?

    Read the article

  • Login as SYS user to Oracle 11g from .NET

    - by Jens Bannmann
    Using the Oracle Data Provider for .NET, my application connects to the database using the privileged SYS user. The connection string is as follows: Data Source=MyTnsName;User ID=sys;Password=MySysPassword;DBA Privilege=SYSDBA This works fine with Oracle 10, but Oracle 11 keeps complaining about an invalid username or password. I verified that the password is correct - other apps work fine with the same credentials. Note that for regular users (without the DBA Privilege part), connecting to Oracle 11 works perfectly. So, what's wrong? Update: This is not an issue with case sensitivity - when constructing the connection string, the password case is not altered by my code, and the password works fine with other, non-.NET-applications. I suspect that this might be caused by the Oracle 10 client I'm using to connect to the 11 database. Oracle states that the client is upward-compatible, the only drawback being that you cannot use some new features of the database. However, SYSDBA connections clearly are not a new Oracle 11 feature, and - again - a non-.NET-app (Keeptool Hora) can connect using the same setup. Any other ideas? Update 2: The problem persists when using an Oracle 11 client :-(

    Read the article

  • Problem: writing parameter values to data driven MSTEST output

    - by Shubh
    I am trying to extract some information about the parameter variants used in an MSTEST data-driven test case from the trx file. Currently, for data-driven tests, I get the output of the same test case with different inputs as a sequence of tags, but there is no info about the value of the variants. Example: suppose we have a [data driven] TestMethod1() and the data rows contain variations a and b. There are two variations: a=1,b=2 for which the test passes and a=3,b=4 for which the test fails. If we can output the info that it was a=1,b=2 which passed and a=3,b=4 which failed in the trx file, the output will be meaningful:

      • Better information about test case runs from the output file alone (without any dependencies).
      • Investigating the test failure without rerunning the whole set.
      • If the data rows change in the data source (now a=1,b=2 pass and a=5,b=6 fail), it is easy to decipher that the errors are different, although the fail sequence is still the same (row 0 passes, row 1 fails, but now row 1 is different).

    Has any of you gone through a similar problem? What did you follow? I tried to put the parameter value information in the Description attribute of TestMethod; it didn't work. Any other methods you think can work too? thanks, Shubhankar

    Read the article

  • Linq to Entities custom ordering via position mapping table

    - by Bigfellahull
    I have a news table and I would like to implement custom ordering. I have done this before via a positional mapping table which has newsIds and a position. I then LEFT OUTER JOIN the position table ON news.newsId = position.itemId with a select case statement CASE WHEN [position] IS NULL THEN 9999 ELSE [position] END and order by position asc, articleDate desc. Now I am trying to do the same with LINQ to Entities. I have set up my tables with a PK/FK relationship so that my News object has an EntityCollection of positions. Now comes the bit I can't work out: how to implement the LEFT OUTER JOIN. I have so far:

        var query = SelectMany(n => n.Positions, (n, s) => new { n, s })
            .OrderBy(x => x.s.position)
            .ThenByDescending(x => x.n.articleDate)
            .Select(x => x.n);

    This kinda works. However this uses an INNER JOIN, so it is not what I am after. I had another idea:

        ret = ret.OrderBy(n => n.ShufflePositions.Select(s => s.position));

    However I get the error "DbSortClause expressions must have a type that is order comparable." I also tried

        ret = ret.GroupJoin(tse.ShufflePositions, n => n.id, s => s.itemId, (n, s) => new { n, s })
            .OrderBy(x => x.s.Select(z => z.position))
            .ThenByDescending(x => x.n.articleDate)
            .Select(x => x.n);

    but I get the same error! If anyone can help me out, it would be much appreciated!

    Read the article

  • Using a Context Menu to delete from a SQLite database in Android

    - by LordSnoutimus
    I have created a list view that displays the names and dates of items stored in a SQLite database. Now I want to use a Context Menu to modify these items stored in the database, such as edit the name, delete, and view. This is the code for the list view:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.listview);
            SQLiteDatabase myDB = null;
            myDB = this.openOrCreateDatabase(MY_DB_NAME, MODE_PRIVATE, null);
            Cursor cur = myDB.rawQuery("SELECT _id, trackname, tracktime" + " FROM " + MY_DB_TABLE, null);
            ListAdapter adapter = new SimpleCursorAdapter(this, R.layout.listview, cur,
                    new String[] { Constants.TRACK_NAME, Constants.TRACK_TIME},
                    new int[] { R.id.text1, R.id.text2});
            ListView list = (ListView)findViewById(R.id.list);
            list.setAdapter(adapter);
            registerForContextMenu(list);
        }

    and the Context Menu...

        public void onCreateContextMenu(ContextMenu menu, View v, ContextMenuInfo menuInfo) {
            super.onCreateContextMenu(menu, v, menuInfo);
            menu.setHeaderTitle("Track Options");
            menu.add(0, CHANGE_NAME, 0, "Change name");
            menu.add(0, VIEW_TRACK, 0, "View track");
            menu.add(0, SEND_TRACK, 0, "Send track");
            menu.add(0, DELETE_TRACK, 0, "Delete track");
        }

    I have used a switch statement to control the menu items:

        public boolean onContextItemSelected(MenuItem item) {
            switch (item.getItemId()){
                case CHANGE_NAME:
                    changename();
                    return true;
                case DELETE_TRACK:
                    deletetrack();
                    return true;
                default:
                    return super.onContextItemSelected(item);
            }
        }

    So how would I go ahead and map the deletetrack() method to find the ID of the track stored in the database for the item that has been selected in the list view?
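    Not from the original post, but the usual way to recover the tapped row's database id is through the AdapterContextMenuInfo that Android attaches to the selected menu item; a sketch of how deletetrack() might use it, inside the same Activity as the code above (table and column names taken from the post, the rest is illustrative):

        // Uses android.widget.AdapterView.AdapterContextMenuInfo
        @Override
        public boolean onContextItemSelected(MenuItem item) {
            // info.id is the _id column value of the long-pressed row, supplied by the CursorAdapter
            AdapterContextMenuInfo info = (AdapterContextMenuInfo) item.getMenuInfo();
            switch (item.getItemId()) {
                case DELETE_TRACK:
                    deletetrack(info.id);
                    return true;
                default:
                    return super.onContextItemSelected(item);
            }
        }

        private void deletetrack(long rowId) {
            SQLiteDatabase myDB = openOrCreateDatabase(MY_DB_NAME, MODE_PRIVATE, null);
            // Delete the row whose _id matches the selected list item
            myDB.delete(MY_DB_TABLE, "_id=?", new String[] { String.valueOf(rowId) });
            myDB.close();
        }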

    Read the article

  • Interchange structured data between Haskell and C

    - by Eonil
    First, I'm a Haskell beginner. I'm planning to integrate Haskell into C for a realtime game. Haskell does the logic, C does the rendering. To do this, I have to pass a huge, complexly structured piece of data (the game state) between them on each tick (at least 30 times per second), so the passing of data should be lightweight. This state data may be laid out in a contiguous block of memory. Both the Haskell and C parts should be able to access every area of the state freely. In the best case, the cost of passing data is copying a pointer to memory. In the worst case, it is copying the whole data with conversion. I'm reading about Haskell's FFI (http://www.haskell.org/haskellwiki/FFICookBook#Working_with_structs). The Haskell code looks like it specifies the memory layout explicitly. I have a few questions:

      1. Can Haskell specify memory layout explicitly (to match exactly with a C struct)?
      2. Is this the real memory layout? Or is some kind of conversion required (performance penalty)?
      3. If Q#2 is true, is there any performance penalty when the memory layout is specified explicitly?
      4. What's the syntax #{alignment foo}? Where can I find the documentation about this?
      5. If I want to pass huge data with the best performance, how should I do that?

    PS: The explicit memory layout feature I mean is just like C#'s [StructLayout] attribute, which specifies in-memory position and size explicitly (http://www.developerfusion.com/article/84519/mastering-structs-in-c/). I'm not sure Haskell has a linguistic construct matching the fields of a C struct.

    Read the article

  • Process-to-port mapping with SNMP and/or wmi/wmic in java

    - by Niddy888
    I'm trying to use SNMP to map outgoing ports on my host computer to the application running on the computer that is responsible for that communication. When running "netstat -ano" I get access to Protocol, Local Address (with port), Foreign Address (with port), State and PID. But I want to do this entirely without having to execute "cmd" from Java. By using SNMP OID .1.3.6.1.2.1.25.4 (.iso.org.dod.internet.mgmt.mib-2.host.hrSWRun) I get access to PID (ex. 1704), Name (ex. cmd.exe), Path (ex. C:\Windows\system32), among others. There is an SNMP OID .1.3.6.1.2.1.6.13 (.iso.org.dod.internet.mgmt.mib-2.tcp.tcpConnTable) that gives you access to TCP connection state, local address, local port, remote address and remote port, but NO PID. So, to sum up my question again: is there a way to "map" these tables together? Either directly in SNMP with other OIDs, or in conjunction with WMI/WMIC?
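    As noted above, the standard tcpConnTable carries no PID column, so SNMP alone does not give the join. For the "without executing cmd from Java" part, though, both tables can at least be walked from Java; a sketch assuming the open-source SNMP4J library, with an illustrative host and community string:

        import java.util.List;
        import org.snmp4j.CommunityTarget;
        import org.snmp4j.Snmp;
        import org.snmp4j.mp.SnmpConstants;
        import org.snmp4j.smi.GenericAddress;
        import org.snmp4j.smi.OID;
        import org.snmp4j.smi.OctetString;
        import org.snmp4j.smi.VariableBinding;
        import org.snmp4j.transport.DefaultUdpTransportMapping;
        import org.snmp4j.util.DefaultPDUFactory;
        import org.snmp4j.util.TreeEvent;
        import org.snmp4j.util.TreeUtils;

        public class SnmpWalkExample {
            public static void main(String[] args) throws Exception {
                Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
                snmp.listen();

                CommunityTarget target = new CommunityTarget();
                target.setCommunity(new OctetString("public"));            // assumed community string
                target.setAddress(GenericAddress.parse("udp:127.0.0.1/161"));
                target.setVersion(SnmpConstants.version2c);
                target.setRetries(1);
                target.setTimeout(2000);

                TreeUtils walker = new TreeUtils(snmp, new DefaultPDUFactory());
                // hrSWRunTable (process name/PID) and tcpConnTable (addresses, ports, state)
                for (String root : new String[] {"1.3.6.1.2.1.25.4.2.1", "1.3.6.1.2.1.6.13.1"}) {
                    List<TreeEvent> events = walker.getSubtree(target, new OID(root));
                    for (TreeEvent e : events) {
                        if (e.getVariableBindings() == null) continue;
                        for (VariableBinding vb : e.getVariableBindings()) {
                            System.out.println(vb.getOid() + " = " + vb.getVariable());
                        }
                    }
                }
                snmp.close();
            }
        }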

    Read the article

  • echo -e acts differently when run in a script by root on ubuntu

    - by ekrub
    When running a bash script on Ubuntu 9.10, I get different behavior from bash echo's "-e" option depending on whether or not I'm running as root. Consider this script:

        $ cat echo-test
        if [ "`whoami`" = "root" ]; then
            echo "Running as root"
        fi
        echo Testing /bin/echo -e
        /bin/echo -e "foo\nbar"
        echo Testing bash echo -e
        echo -e "foo\nbar"

    When run as a non-root user, I see this output:

        $ ./echo-test
        Testing /bin/echo -e
        foo
        bar
        Testing bash echo -e
        foo
        bar

    When run as root, I see this output:

        $ sudo ./echo-test
        Running as root
        Testing /bin/echo -e
        foo
        bar
        Testing bash echo -e
        -e foo
        bar

    Notice the "-e" being echoed in the last case ("-e foo" instead of "foo" on the second-to-last line). When running the script as root, echo behaves as if "-e" were always in effect, and when -e is actually given, the option itself is echoed. I can understand some subtle differences in behavior between /bin/echo and bash echo, but I would expect bash echo to behave the same no matter which user invokes it. Anyone know why this is the case? Is this a bug in bash echo? FYI -- I'm running GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu).

    Read the article

  • Why can't I roll a loop in Javascript?

    - by Carl Manaster
    I am working on a web page that uses dojo and has a number (6 in my test case, but variable in general) of project widgets on it. I'm invoking dojo.addOnLoad(init), and in my init() function I have these lines:

        dojo.connect(dijit.byId("project" + 0).InputNode, "onChange", function() {makeMatch(0);});
        dojo.connect(dijit.byId("project" + 1).InputNode, "onChange", function() {makeMatch(1);});
        dojo.connect(dijit.byId("project" + 2).InputNode, "onChange", function() {makeMatch(2);});
        dojo.connect(dijit.byId("project" + 3).InputNode, "onChange", function() {makeMatch(3);});
        dojo.connect(dijit.byId("project" + 4).InputNode, "onChange", function() {makeMatch(4);});
        dojo.connect(dijit.byId("project" + 5).InputNode, "onChange", function() {makeMatch(5);});

    and change events for my project widgets properly invoke the makeMatch function. But if I replace them with a loop:

        for (var i = 0; i < 6; i++)
            dojo.connect(dijit.byId("project" + i).InputNode, "onChange", function() {makeMatch(i);});

    same makeMatch() function, same init() invocation, same everything else - just rolling my calls up into a loop - the makeMatch function is never called; the objects are not wired. What's going on, and how do I fix it? I've tried using dojo.query, but its behavior is the same as in the for loop case.

    Read the article

  • Career day in kindergarten

    - by Péter Török
    I was invited to the kindergarten group of my elder daughter to talk and answer the kids' questions about my profession. There are 26 kids of age 4-6 in the group, plus 3 teachers who are fairly scared of anything related to programming and IT themselves, but bold enough to learn new tricks. I would have about 20-30 minutes, without projector or anything. They have an old computer though, which by its look may be a 486, and I am not even sure if it's functioning. My research turned up excellent earlier threads, with lots of good tips: How would you explain your job to a 5-year old? Career Day: how do I make “computer programmer” sound cool to 8 year olds? What things can I teach a group of children about programming in one day? My situation is different from each of the above though: the latter ones are concerned with older children, while the first one is about talking to a single kid (or elder person)—a group of 20 is a whole different challenge. How can I teach the kids and their teachers about programming in a fun way?

    Read the article

  • Few Basic Questions in Overriding

    - by Dahlia
    I have a few problems with my basics and would be thankful if someone can clear them up. What does it mean when I say base *b = new derived;? Why would one go for this? We very well can create objects for class base and class derived separately and then call the functions accordingly. I know that this base *b = new derived; is called Object Slicing, but why and when would one go for this? I know why it is not advisable to convert the base class object to a derived class object (because the base class is not aware of the derived class members and methods). I even read in other StackOverflow threads that if this is going to be the case then we have to change/re-visit our design. I understand all that; however, I am just curious, is there any way to do this?

        class base {
        public:
            void f(){cout << "In Base";}
        };

        class derived : public base {
        public:
            void f(){cout << "In Derived";}
        };

        int _tmain(int argc, _TCHAR* argv[])
        {
            base b1, b2;
            derived d1, d2;
            b2 = d1;
            d2 = reinterpret_cast<derived*>(b1); //gives error C2440
            b1.f(); // Prints In Base
            d1.f(); // Prints In Derived
            b2.f(); // Prints In Base
            d1.base::f(); //Prints In Base
            d2.f();
            getch();
            return 0;
        }

    In the case of my above example, is there any way I could call the base class f() using a derived class object? I used d1.base::f(). I just want to know if there is any way without using the scope resolution operator? Thanks a lot for your time in helping me out!

    Read the article

  • C++: parsing with simple regular expression or should I use sscanf?

    - by Helltone
    I need to parse a string like func1(arg1, arg2); func2(arg3, arg4);. It's not a very complex parsing problem, so I would prefer to avoid resorting to flex/bison or similar utilities. My first approach was to try to use POSIX C regcomp/regexec or the Boost implementation of C++ std::regex. I wrote the following regular expression, which does not work (I'll explain why further on).

        "^"
        "[ ;\t\n]*"
        "("                          // (1) identifier
            "[a-zA-Z_][a-zA-Z0-9_]*"
        ")"
        "[ \t\n]*"
        "("                          // (2) non-marking
            "\["
            "("                      // (3) non-marking
                "[ \t]*"
                "("                  // (4..n-1) argument
                    "[a-zA-Z0-9_]+"
                ")"
                "[ \t\n]*"
                ","
            ")*"
            "[ \t\n]*"
            "("                      // (n) last argument
                "[a-zA-Z0-9_]+"
            ")"
            "]"
        ")?"
        "[ \t\n]*"
        ";"

    Note that group 1 captures the identifier and groups 4..n-1 are intended to capture arguments except the last, which is captured by group n. When I apply this regex to, say, func(arg1, arg2, arg3), the result I get is an array {func, arg2, arg3}. This is wrong because arg1 is not in it! The problem is that in the standard regex libraries, submarkings only capture the last match. In other words, if you have for instance the regex "((a*|b*))*" applied on "babb", the result of the inner match will be bb and all previous captures will have been forgotten. Another thing that annoys me here is that in case of error there is no way to know which character was not recognized, as these functions provide very little information about the state of the parser when the input is rejected. So I don't know if I'm missing something here... In this case should I use sscanf or something similar instead? Note that I prefer to use the C/C++ standard libraries (and maybe Boost).

    Read the article

  • pure/const functions in C++

    - by Albert
    Hi, I'm thinking of using pure/const functions more heavily in my C++ code. (pure/const attribute in GCC) However, I am curious how strict I should be about it and what could possibly break. The most obvious case are debug outputs (in whatever form, could be on cout, in some file or in some custom debug class). I probably will have a lot of functions, which don't have any side effects despite this sort of debug output. No matter if the debug output is made or not, this will absolutely have no effect on the rest of my application. Or another case I'm thinking of is the use of my own SmartPointer class. In debug mode, my SmartPointer class has some global register where it does some extra checks. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory probably will be different) which should not have any real side effects though (in the sense that the behaviour is in any way different). Similar also for mutexes and other stuff. I can think of many complex cases where it has some side effects (in the sense of that some memory will be different, maybe even some threads are created, some filesystem manipulation is made, etc) but has no computational difference (all those side effects could very well be left out and I would even prefer that). How does it work out in practice? If I mark such functions as pure/const, could it break anything (considering that the code is all correct)?

    Read the article

  • leak in fgets when assigning to buffer

    - by monkeyking
    I'm having problems understanding why the following code leaks in one case and not in the other. The difference is

        while(NULL!=fgets(buffer,length,file))            // doesn't leak
        while(NULL!=(buffer=fgets(buffer,length,file)))   // leaks

    I thought it would be the same. Full code below.

        #include <stdio.h>
        #include <stdlib.h>
        #define LENS 10000

        void no_leak(const char* argv){
            char *buffer = (char *) malloc(LENS);
            FILE *fp=fopen(argv,"r");
            while(NULL!=fgets(buffer,LENS,fp)){
                fprintf(stderr,"%s",buffer);
            }
            fclose(fp);
            fprintf(stderr,"%s\n",buffer);
            free(buffer);
        }

        void with_leak(const char* argv){
            char *buffer = (char *) malloc(LENS);
            FILE *fp=fopen(argv,"r");
            while(NULL!=(buffer=fgets(buffer,LENS,fp))){
                fprintf(stderr,"%s",buffer);
            }
            fclose(fp);
            fprintf(stderr,"%s\n",buffer);
            free(buffer);
        }

    Read the article

  • Ruby TypeErrors involving `expected Data`

    - by Kenny Peng
    I've run into situations where I have gotten these expected Data errors before, but they have always pointed to ActiveRecord not playing well with other libraries in the past. This piece of code:

        def load(kv_block, debug=false)
          # Converts a string block to a Hash using split
          kv_map = StringUtils.kv_array_to_hash(kv_block)
          # Loop through each key, value
          kv_map.each do |mem,val|
            # Format the member from camel case to underscore
            member = mem.camel_to_underscore()
            # If the object includes a method to set the key (i.e. the key
            # is a member of self), invoke the method, setting the value of
            # the member)
            if self.methods.include?(member.to_set_method_name()) then   # Exception thrown here
              self.send(member.to_set_method_name(), val)
            # Else, check for the same case, this time for an instance variable
            elsif self.instance_variable_defined?(member.to_instance_var_name())
              self.instance_variable_set(member.to_instance_var_name(), val)
            # Else, complain that the object doesn't understand the key with
            # respect to its class definition.
            else
              raise ArgumentError, "I don't know what to do with #{member}. #{self.class} does not have a member or function called #{member}"
            end
          end
        end

    produces the error wrong argument type #<Class:0x11a02088> (expected Data) (TypeError) in the each loop, on the first if test. I've inspected a post-mortem debugging instance using rdebug, and running that line manually, it works without a hitch. Has anyone seen this error before, and what was your solution to it? I used to think it was ActiveRecord and other gems stomping on each other's definitions, but I removed any references to ActiveRecord and this still occurs.

    Read the article

  • How to write this function as a pL/pgSQl function ?

    - by morpheous
    I am trying to implement some business logic in a PL/pgSQL function. I have hacked together some pseudo code that explains the type of business logic I want to include in the function. Note: this function returns a table, so I can use it in a query like:

        SELECT A.col1, B.col1
        FROM (SELECT * from some_table_returning_func(1, 1, 2, 3)) as A, tbl2 as B;

    The pseudocode of the PL/pgSQL function is below:

        CREATE FUNCTION some_table_returning_func(uid int, type_id int, filter_type_id int, filter_id int)
        RETURNS TABLE AS $$
        DECLARE
            where_clause text := 'tbl1.id = ' + uid;
            ret TABLE;
        BEGIN
            switch (filter_type_id)
            {
                case 1:
                    switch (filter_id)
                    {
                        case 1:
                            where_clause += ' AND tbl1.item_id = tbl2.id AND tbl2.type_id = filter_id';
                            break;
                        //other cases follow ...
                    }
                    break;
                //other cases follow ...
            }
            // where clause has been built, now run query based on the type
            ret = SELECT [COL1, ... COLN] WHERE where_clause;
            IF (type_id <> 1) THEN
                return ret;
            ELSE
                return select * from another_table_returning_func(ret,123);
            ENDIF;
        END;
        $$ LANGUAGE plpgsql;

    I have the following questions:

      1. How can I write the function correctly (i.e. EXECUTE the query with the generated WHERE clause) and return a table?
      2. How can I write a PL/pgSQL function that accepts a table and an integer and returns a table (another_table_returning_func)?

    Read the article

  • Javascript/jQuery: programmatically follow a link

    - by Dan
    In Javascript code, I would like to programmatically cause the browser to follow a link that's on my page. Simple case: <a id="foo" href="mailto:[email protected]">something</a> function goToBar() { $('#foo').trigger('follow'); } This is hypothetical as it doesn't actually work. And no, triggering click doesn't do it. I am aware of window.location and window.open but these differ from native link-following in some ways that matter to me: a) in the presence of a <base /> element, and b) in the case of mailto URLs. The latter in particular is significant. In Firefox at least, calling window.location.href = "mailto:[email protected]" causes the window's unload handlers to fire, whereas simply clicking a mailto link does not, as far as I can tell. I'm looking for a way to trigger the browser's default handling of links, from Javascript code. Does such a mechanism exist? Toolkit-specific answers also welcome (especially for Gecko).

    Read the article

  • OpenGLES - Rendering a background image only once and not wiping it

    - by chaosbeaker
    Hello, first time asking a question here, but I've been watching others' answers for a while. My question is about improving the performance of my program. Currently I'm wiping the viewFramebuffer on each pass through my program and then rendering the background image first, followed by the rest of my scene. I was wondering how I go about rendering the background image once, and only wiping the rest of the scene for updating/re-rendering. I tried using a separate buffer, but I'm not sure how to present this new buffer to the render buffer.

        // Set the current EAGLContext and bind to the framebuffer. This will direct all OGL commands to the
        // framebuffer and the associated renderbuffer attachment which is where our scene will be rendered
        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

        // Define the viewport. Changing the settings for the viewport can allow you to scale the viewport
        // as well as the dimensions etc and so I'm setting it for each frame in case we want to change it
        glViewport(0, 0, screenBounds.size.width , screenBounds.size.height);

        // Clear the screen. If we are going to draw a background image then this clear is not necessary
        // as drawing the background image will destroy the previous image
        glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // Setup how the images are to be blended when rendered. This could be changed at different points during your
        // render process if you wanted to apply different effects
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        switch (currentViewInt) {
            case 1: {
                [background render:CGPointMake(240, 0) fromTopLeftBottomRightCenter:@"Bottom"];
                // Other Rendering Code
            }
        }

        // Bind to the renderbuffer and then present this image to the current context
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    Hopefully by solving this I'll also be able to implement another buffer just for rendering particles, as I can set them to always use a black background as their alpha source. Any help is greatly appreciated.

    Read the article

  • CIE XYZ colorspace: do I have RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of xy plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points being referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them on the picture myself). My pixel format stores the intensity of each of those three components the same way RGB does, together with an alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA. RGBA will get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article

  • A generic error occurred in GDI+, JPEG Image to MemoryStream

    - by madcapnmckay
    This seems to be a bit of an infamous error all over the web, so much so that I have been unable to find an answer to my problem, as my scenario doesn't fit. An exception gets thrown when I save the image to the stream. Weirdly this works perfectly with a png, but gives the above error with jpg and gif, which is rather confusing. Most similar problems out there relate to saving images to files without permissions. Ironically the solution there is to use a memory stream, as I am doing....

        public static byte[] ConvertImageToByteArray(Image imageToConvert)
        {
            using (var ms = new MemoryStream())
            {
                ImageFormat format;
                switch (imageToConvert.MimeType())
                {
                    case "image/png":
                        format = ImageFormat.Png;
                        break;
                    case "image/gif":
                        format = ImageFormat.Gif;
                        break;
                    default:
                        format = ImageFormat.Jpeg;
                        break;
                }
                imageToConvert.Save(ms, format);
                return ms.ToArray();
            }
        }

    More detail on the exception. The reason this causes so many issues is the lack of explanation :(

        System.Runtime.InteropServices.ExternalException was unhandled by user code
        Message="A generic error occurred in GDI+."
        Source="System.Drawing"
        ErrorCode=-2147467259
        StackTrace:
            at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
            at System.Drawing.Image.Save(Stream stream, ImageFormat format)
            at Caldoo.Infrastructure.PhotoEditor.ConvertImageToByteArray(Image imageToConvert) in C:\Users\Ian\SVN\Caldoo\Caldoo.Coordinator\PhotoEditor.cs:line 139
            at Caldoo.Web.Controllers.PictureController.Croppable() in C:\Users\Ian\SVN\Caldoo\Caldoo.Web\Controllers\PictureController.cs:line 132
            at lambda_method(ExecutionScope , ControllerBase , Object[] )
            at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
            at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
            at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7()
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)
        InnerException:

    OK, things I have tried so far:

      • Cloning the image and working on that.
      • Retrieving the encoder for that MIME type and passing that with a JPEG quality setting.

    Please can anyone help.

    Read the article

  • Hibernate not saving foreign key, but with junit it's ok

    - by Leonardo
    Hi All, I have this strange problem. In a J2ee webapp with spring, smartgwt and hibernate, it happens that I have a class A wich has a set of class B, both of them mapped to table A and table B. I wrote a simple test case for testing the service manager which is supposed to do insert, update, delete and everything work as expected especially during insert. In the end I have one record in A and records in B with foreign key to A. But when I try to call the service from the web app, the entity in B are saved without a foreign key reference. I am sure that the service is the same. One thing I noticed is that enabling hibernate logging, seems that when the service is called from the application, one more update is made: insert A insert B update A update B update B (foreign key only) update A <--- ??? update B <--- ??? Instead, when junit test case is run, the update is as follows: insert A insert B update A update B update B (foreign key only) I suppose the latest update is what is causing the erroe, maybe it is overwriting values. Considering that the app is using spring, with the well known mechanism of DAO + Manager, where can I investigate to solve this issue ? Someone told me that the session is not closed, so hibernate would do one more update before release the objects by itself. I am pretty sure that all the configuration hbm, xml, and the rest are fine...but I maybe wrong. thanks

    Read the article

  • prototype findElements querySelectorAll error

    - by JD
    I'm calling the "down" function but am getting an invalid argument error using 1.6.1_rc2. Here's the HTML snippet:

        <TR id=000000214A class="activeRow searchResultsDisplayOver" conceptID="0000001KIU">
          <TD>
            <DIV class=gridRowWrapper>
              <SPAN class=SynDesc>Asymmetric breasts</SPAN>
              <DIV class=buttonWrapper>
                <SPAN class=btnAddFav title="Add to Favorites">&nbsp;</SPAN>
              </DIV>
            </DIV>
          </TD>
        </TR>

    Here's the code:

        var description = row.down('span.SynDesc').innerHTML;

    row is a DOM reference to the element. Prototype is appending a # then the id of the element:

        findElements: function(root) {
            root = root || document;
            var e = this.expression, results;
            switch (this.mode) {
                case 'selectorsAPI':
                    if (root !== document) {
                        var oldId = root.id, id = $(root).identify();
                        id = id.replace(/[\.:]/g, "\\$0");
                        e = "#" + id + " " + e;
                    }
                    results = $A(root.querySelectorAll(e)).map(Element.extend);   // <-- e = "#000000214A span.SynDesc"
                    root.id = oldId;
                    return results;
                case 'xpath':
                    return document._getElementsByXPath(this.xpath, root);
                default:
                    return this.matcher(root);
            }
        }

    I get an "invalid argument" error. If I put a breakpoint before the offending line and change e to "span.SynDesc", it works fine. Help. :)

    Read the article
