Search Results

Search found 16086 results on 644 pages for 'screen scraping'.


  • X11 and ARGB visuals: does DefaultDepth() never return 32?

    - by Andy
    Hi, I'm establishing a connection to the X server like this:

        display = XOpenDisplay(NULL);
        screen = DefaultScreen(display);
        depth = DefaultDepth(display, screen);

    I'm wondering why "depth" is always set to 24. I would expect it to be 24 only when compositing is turned off, but in fact it is still 24 even when I turn compositing on. So in order to get a 32-bit ARGB visual I have to call XGetVisualInfo() first with the depth explicitly set to 32. Now to my question: will DefaultDepth() generally never return more than 24, or is that just my system? (My graphics board is somewhat dated...) I know it can return 15, 16 or even 8 for a CLUT display, but can it return 32? Or do I always have to use XGetVisualInfo() first to get an ARGB 32-bit visual? Thanks, Andy
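
    A hedged aside, not from the article's answer: the usual route is to ask the server for the deeper visual explicitly. XMatchVisualInfo() is a convenience wrapper over XGetVisualInfo() that fills in an XVisualInfo if a visual of the requested depth and class exists, regardless of what DefaultDepth() reports. A minimal C++-compilable sketch:

        #include <X11/Xlib.h>
        #include <X11/Xutil.h>
        #include <cstdio>

        int main() {
            Display *display = XOpenDisplay(NULL);
            if (!display) return 1;
            int screen = DefaultScreen(display);

            // DefaultDepth() describes the root window's visual (commonly 24),
            // so request a 32-bit TrueColor visual directly instead.
            XVisualInfo vinfo;
            if (XMatchVisualInfo(display, screen, 32, TrueColor, &vinfo)) {
                std::printf("ARGB visual: id=0x%lx, depth=%d\n",
                            (unsigned long)vinfo.visualid, vinfo.depth);
            } else {
                std::printf("No 32-bit visual on this screen\n");
            }
            XCloseDisplay(display);
            return 0;
        }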

    Read the article

  • Releasing the keyboard stops shake events. Why?

    - by Moshe
    1) How do I make a UITextField resign the keyboard and hide it? The keyboard is in a dynamically created subview whose superview looks for shake events. Resigning first responder seems to break the shake event handler. 2) How do you make the view holding the keyboard transparent, like see-through glass? I have seen this done before. This part has been taken care of, thanks guys. As always, code samples are appreciated. I've added my own to help explain the problem.

    EDIT: Basically, - (void)motionBegan:(UIEventSubtype)motion withEvent:(UIEvent *)event; gets called in my main view controller to handle shaking. When a user taps the "edit" icon (a pen at the bottom of the screen, not the traditional UINavigationBar edit button), the main view adds a subview to itself and animates it onto the screen using a custom animation. This subview contains a UINavigationController which holds a UITableView. The UITableView, when a cell is tapped, loads a second subview into itself. This second subview is the culprit: for some reason, a UITextField in it is causing problems. When a user taps on the view, the main view will not respond to shakes unless the UITextField is active (in editing mode?).

    Additional info: my motion event handler:

        - (void)motionBegan:(UIEventSubtype)motion withEvent:(UIEvent *)event {
            NSLog(@"%@", [event description]);
            SystemSoundID SoundID;
            NSString *soundFile = [[NSBundle mainBundle] pathForResource:@"shake" ofType:@"aif"];
            AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:soundFile], &SoundID);
            AudioServicesPlayAlertSound(SoundID);
            [self genRandom:TRUE];
        }

    The genRandom: method:

        /* Generate random label and apply it */
        - (void)genRandom:(BOOL)deviceWasShaken {
            if (deviceWasShaken == TRUE) {
                decisionText.text = [NSString stringWithFormat:@"%@",
                    [shakeReplies objectAtIndex:(arc4random() % [shakeReplies count])]];
            } else {
                SystemSoundID SoundID;
                NSString *soundFile = [[NSBundle mainBundle] pathForResource:@"string" ofType:@"aif"];
                AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:soundFile], &SoundID);
                AudioServicesPlayAlertSound(SoundID);
                decisionText.text = [NSString stringWithFormat:@"%@",
                    [pokeReplies objectAtIndex:(arc4random() % [pokeReplies count])]];
            }
        }

    shakeReplies and pokeReplies are both NSArrays of strings; one is used when a certain part of the screen is poked and one when the device is shaken. The app randomly chooses a string from the array and displays it onscreen. For those of you who work graphically, here is a diagram of the view hierarchy: Root View -> UINavigationController -> UITableView -> Edit View -> Problem UITextField

    Read the article

  • Monitoring instantaneous network throughput at one second intervals?

    - by Shaddi
    For a testing setup I have, I need to monitor the throughput through a "router"* at regular intervals of around 5 seconds or less (sub-second intervals would be very nice, but not required). Ideally, I would be able to generate a file containing both the number of bytes and the number of packets seen during each interval; I will eventually generate a time series of throughput from this data. On a previous setup using an older version of FreeBSD, there was a tool called "bpfmon" which gave me this information. However, I need to do this under a modern version of Linux (namely, Ubuntu 11.04). I have looked at both iptraf and iftop, but these do not appear to provide the resolution I need, nor do they seem to easily allow scraping out the data I want. I understand iptables statistics may be able to give me what I'm after, but the examples I've seen rely on repeatedly reading and resetting the traffic counters, which could be inaccurate since read-then-reset is not an atomic operation. I already capture a tcpdump trace of the traffic I'm interested in on the link I want to monitor, so I am open to approaches which simply parse that. I feel like this must be a common problem, though, so I am hoping there is a standard "best practice" tool for accomplishing this. *I say "router" in quotes because I am really talking about a machine with two bridged NICs through which all the traffic I'm interested in passes.
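
    One hedged approach (an assumption on my part, not the article's answer): on Linux, /proc/net/dev exposes cumulative per-interface byte and packet counters, so sampling it once per interval and differencing successive samples sidesteps the read/reset race entirely, since the counters are monotonic. A sketch, assuming the bridge interface is named br0:

        #include <chrono>
        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <thread>

        struct Counters { unsigned long long bytes = 0, packets = 0; };

        // Read the cumulative receive counters for one interface.
        Counters sample(const std::string& iface) {
            std::ifstream dev("/proc/net/dev");
            std::string line;
            Counters c;
            while (std::getline(dev, line)) {
                auto pos = line.find(iface + ":");
                if (pos == std::string::npos) continue;
                std::istringstream fields(line.substr(pos + iface.size() + 1));
                fields >> c.bytes >> c.packets; // first two fields are rx bytes, rx packets
                break;
            }
            return c;
        }

        int main() {
            const std::string iface = "br0"; // assumed interface name
            Counters prev = sample(iface);
            while (true) {
                std::this_thread::sleep_for(std::chrono::seconds(1));
                Counters cur = sample(iface);
                std::cout << (cur.bytes - prev.bytes) << " bytes, "
                          << (cur.packets - prev.packets) << " packets\n";
                prev = cur; // no reset needed: counters only ever increase
            }
        }

    This reports received traffic only; the transmit fields follow on the same /proc/net/dev line if both directions are needed.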

    Read the article

  • Localizing other pages in Settings.bundle

    - by joneswah
    I have other pages within my app preferences which are stored as separate files within the Settings.bundle. It has come time to localize my app, and I can only seem to get the Root values to localize; I was wondering whether there is a trick. The following image shows that my second screen is stored within a file called "MyPrefs.plist", and I have created a correspondingly named file "MyPrefs.strings" in the en.lproj directory, mirroring the naming and location of Root.plist and Root.strings. The values within Root.plist are converted as expected, but not those in the extra screen. Is there any trick to localizing secondary screens within the Settings.bundle?

    Read the article

  • Recreating iPhone stock application with nested views

    - by john
    I am trying to create an app similar to the Yahoo Stocks app that comes on the iPhone, with the split-screen interface (table on the top, graph on the bottom), and I'm struggling with the view hierarchy. What is the easiest way to implement a split-screen type of application? I basically want two views nested in a parent view. My problem is a little more complex because I also want functionality like a UIPageControl (does this require another view controller, or is it simply implemented in the initial view controller?). To what degree do I need to use IB? I would prefer to do this all in Xcode. Thanks in advance!

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can hand it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many individual rules to an iptables chain. ipt_recent would be awesome for this, and it also provides a lot of flexibility for merely slowing down access severely, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than the one that ships with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather avoid from a patching, security and consistency perspective. Other than those two, nfblock looks like a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?

    Read the article

  • Manually force touch points to reset in Windows 8?

    - by loyalpenguin
    Hi. I developed an HTML5/JavaScript app, supported by advertising, for the Windows 8 Store. I happened to notice by chance that if you touch the screen, drag your finger over the advertisement, and release your finger on top of the advertisement, that specific touch is never released; when you touch again, it registers as a separate touch. This has caused my app to behave unexpectedly when the user interacts with it using touch. I wanted to know if it is possible to force the touch points to reset so that when the user touches the screen again it is always registered as "Touch(0)".

    Read the article

  • Resize AIR app window while dragging

    - by matt lohkamp
    So I've noticed Windows 7 has a disturbing tendency to prevent you from dragging the title bar of a window off the top of the screen. If you try - in this case, using an AIR app with a draggable area at the bottom of the window, allowing you to push the top of the window up past the screen - it just kicks the window back down far enough that the title bar is at the top of what it considers the 'visible area'. One solution would be to resize the app window as it moves, so that the title bar is always where Windows wants it. How would you resize the window while you're dragging it, though? Would you do it like this?

        dragHitArea.addEventListener(MouseEvent.MOUSE_DOWN, function(e:MouseEvent):void {
            stage.nativeWindow.height += 50;
            stage.nativeWindow.startMove();
            stage.nativeWindow.height -= 50;
        });

    See what's going on there? When I click, I'm calling startMove(), which hooks into the OS's function for dragging a window around. I'm also increasing and then decreasing the height of the window by 50 pixels - which should give me no net change, right? Wrong - the first '.height +=' gets executed, but the '.height -=' after the startMove() never runs. Why?

    update - If you're curious, I'm programming an AIR widget with fly-out menus which expand rightwards and upwards - and since those elements can only be displayed within the boundaries of the application window itself (even though the window is set to be chromeless and transparent), I have to expand the application's borders to include the area that the menu 'pops up' into. In the extreme case, with the widget positioned bottom left and the menus expanded completely across to the right side and top edge of the screen, the application area could very well cover the entire desktop. The problem is, when it's expanded like this and the user drags it up and to the right, the 'title bar' area of the application window moves above the top edge of the desktop area, where it would normally be unreachable, and Windows automatically re-positions the window back below that edge once the startMove() operation completes. So what I want to do is continually resize the height of the application so that the visual effect is the same for the user, but as far as the operating system is concerned the window's title bar is never above the top boundary of the desktop area.

    Read the article

  • PreferenceActivity not showing the second time on Android

    - by Andrea
    Hello there, I've got a PreferenceActivity which works perfectly the first time I launch it. But if I close it (with the back button) and then reopen it (through a menu click in the main activity), I get a black screen. There are no preferences at all. I can't figure out why it isn't working. All the code seems to be called just as in the working case, but no preferences show up on the screen. Has anyone had the same behaviour?

    Read the article

  • Circular gradient in Android

    - by sandis
    I'm trying to make a gradient that emits from the middle of the screen in white and turns to black as it moves toward the edges of the screen. I make a "normal" gradient like this, and have been experimenting with different shapes:

        <shape xmlns:android="http://schemas.android.com/apk/res/android"
               android:shape="rectangle">
            <gradient android:startColor="#E9E9E9"
                      android:endColor="#D4D4D4"
                      android:angle="270"/>
        </shape>

    When using the "oval" shape I at least got a round shape, but there was no gradient effect. How can I achieve this? Cheers,

    Read the article

  • Help using Lisp debugger

    - by Joel
    I'm trying to understand how to interpret the output of, and use, the Lisp debugger. I've got a pretty simple backtrace for the evaluation of my function, but I can't seem to work out how to use it to find out in which Lisp 'form' in my function the exception occurred. I'd appreciate any clues as to what I should be doing to find the source of the error. I've attached a screenshot (if it's too small to read I can re-post it in parts) with the debug output, the function, and the REPL (please ignore my very wrong function; I'm just interested in learning how to use the debugger properly). In addition, I hit 'v' on the first frame to go to the source, but this resulted in the error at the bottom of the screen.

    Read the article

  • iPhone post-processing with a single FBO with Opengl ES 2.0?

    - by Jing
    I am trying to implement post-processing (blur, bloom, etc.) on the iPhone using OpenGL ES 2.0, and I am running into some issues. During my second rendering pass, I end up drawing a completely black quad to the screen instead of the scene (it appears that the texture data is missing), so I am wondering if the cause is using a single FBO. Is it incorrect to use a single FBO in the following fashion? For the first pass (regular scene rendering), I attach a texture as COLOR_ATTACHMENT_0 and render to the texture:

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texturebuffer, 0);

    For the second pass (post-processing), I attach the color renderbuffer to COLOR_ATTACHMENT_0:

        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);

    Then I use the texture from the first pass for rendering as a quad on the screen.
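
    For what it's worth, a hedged sketch of the single-FBO approach (width, height and defaultFramebuffer are assumptions standing in for your own setup). One frequent cause of an all-black quad here is an incomplete texture: a render-target texture created without mipmaps must not keep the default GL_NEAREST_MIPMAP_LINEAR minification filter.

        // Setup (once): one FBO and one texture to render the scene into.
        GLuint fbo, sceneTex;
        glGenFramebuffers(1, &fbo);
        glGenTextures(1, &sceneTex);
        glBindTexture(GL_TEXTURE_2D, sceneTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Pass 1: render the scene into the texture.
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, sceneTex, 0);
        // ... draw the scene ...

        // Pass 2: switch to the onscreen framebuffer and sample the texture.
        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer); // renderbuffer-backed on iOS
        glBindTexture(GL_TEXTURE_2D, sceneTex); // must no longer be the render target
        // ... draw a fullscreen quad with the post-processing shader ...

    Checking glCheckFramebufferStatus(GL_FRAMEBUFFER) after each attachment change is a cheap way to confirm the FBO itself isn't the problem.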

    Read the article

  • Is there an optimal way to render images in Cocoa? I'm using setNeedsDisplay

    - by Edward An
    Currently, any time I manually move a UIImage (by handling the touchesMoved event), the last thing I call in that event is [self setNeedsDisplay], which effectively redraws the entire view. My images are also animated, so every time a frame of animation changes I have to call setNeedsDisplay. I find this horrific, since I don't expect the iPhone/Cocoa to be able to perform such frequent full-screen redraws very quickly. Is there an optimal, more efficient way I could be doing this? Perhaps some way of telling Cocoa to update only a particular region of the screen (the rect region of the image)?

    Read the article

  • Can an app delete its own internal resources?

    - by user637884
    I am trying to find a way to delete an internal resource after an app installs. More specifically, I have a zip file included in the apk that I unzip to the SD card when the app is first run, but I then want to delete the now-unneeded zip file (the purpose being to save the user internal phone memory). I access the zip file with:

        Resources resources = this.getResources();
        InputStream is = resources.openRawResource(R.raw.assets);

    But I am uncertain how to then delete the resource (if that is even possible). I know one may ask why not simply install the app to the SD card at download, but the app includes a screen widget, and installing an app to the SD card while using a screen widget is problematic. Thanks, Matt

    Read the article

  • iPad as programming platform--What future do touch screens have in programming?

    - by user94154
    I read this question a few weeks ago, and I thought about it when I first saw the iPad. Do you think it would be possible to set up a development environment on the iPad? I think it would be awesome if there were an InstantRails app or a Django app; maybe even 280 North's Atlas could run on it :). Would you develop using an on-screen keyboard and a 10-inch screen? Steve Jobs seems to think touch screens are the future of web browsing. What future does touch have in programming?

    Read the article

  • Poll database using jQuery/Ajax

    - by Gav
    Hi guys, I am trying to use jQuery (latest version) and Ajax to poll a MySQL db every x seconds. post.php does a simple search query on the table and limits it to 1 row (e.g. SELECT id FROM TABLE LIMIT 1). I've got some other jQuery UI (v1.8) code that displays modal/dialog boxes on the screen; simply put, if post.php returns something from the db, I need to initialise the dialog and pop it up onto the screen. I've done all the popup stuff already; I am just having issues joining all these bits together - I've added some pseudocode for how I expect this to work. Thanks in advance.

        var refreshId = setInterval(function() {
            $.ajax({
                type: "POST",
                url: "post.php",
                data: "",
                success: function(html) {
                    $("#responsecontainer").html(html);
                }
            });
        }, 2000);

        /* proposed pseudocode */
        if (ajax is successful & returns a db row to #responsecontainer) {
            show jQueryUI modal (done this bit already fortunately)
        }

    Read the article

  • iPhone webapp: my resources don't get cached

    - by Savageman
    Hello, First of all, I'd like to say I'm not using any offline features from HTML5. I have a web application which runs on the iPhone. When viewing it from Safari, everything works quite well. But when I launch the application from the home screen (to remove the navigation bar), it can be really slow. I checked the logs in Apache, and it appears that Safari does a good job of caching the resources (css / js / images), with Apache answering "304 Not Modified" when appropriate. However, when the web app runs as a "real" application (navigation bar hidden), those resources don't get cached, and the content has to be transferred over and over again (response code 200 OK + content), resulting in significantly slower page loads. How can I prevent this behavior? Do I need to always run my webapp inside Safari, even when it's launched from the home screen? Thank you!

    Read the article

  • What is the logic behind using semantically meaningful markup?

    - by metal-gear-solid
    Is it only for screen-reader software? I ask because the browser renders both kinds of tags, semantic and presentational, in the same manner. For example, to the browser (and to us, via CSS), <strong> and <b> are the same. What is the purpose of a semantic tag over a presentational one? Is it for screen readers only, or is it for better management of code? If it's for developers, <strong> and <b> both produce the same result in the browser.

    Read the article

  • OpenGL Coordinate system confusion

    - by user146780
    Maybe I set up GLUT wrong. Basically, I want vertices to be relative to their size in pixels. For example, right now if I create a hexagon, it takes up the whole screen even though the units are 6.

        #include <iostream>
        #include <stdlib.h> //Needed for "exit" function
        #include <cmath>

        //Include OpenGL header files, so that we can use OpenGL
        #ifdef __APPLE__
        #include <OpenGL/OpenGL.h>
        #include <GLUT/glut.h>
        #else
        #include <GL/glut.h>
        #endif

        using namespace std;

        //Called when a key is pressed
        void handleKeypress(unsigned char key, //The key that was pressed
                            int x, int y) {    //The current mouse coordinates
            switch (key) {
                case 27: //Escape key
                    exit(0); //Exit the program
            }
        }

        //Initializes 3D rendering
        void initRendering() {
            //Makes 3D drawing work when something is in front of something else
            glEnable(GL_DEPTH_TEST);
        }

        //Called when the window is resized
        void handleResize(int w, int h) {
            //Tell OpenGL how to convert from coordinates to pixel values
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION); //Switch to setting the camera perspective
            glLoadIdentity(); //Reset the camera
            //Set the camera perspective
            gluPerspective(45.0,                  //The camera angle
                           (double)w / (double)h, //The width-to-height ratio
                           1.0,                   //The near z clipping coordinate
                           200.0);                //The far z clipping coordinate
        }

        //Draws the 3D scene
        void drawScene() {
            //Clear information from last draw
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity(); //Reset the drawing perspective
            glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
            glBegin(GL_POLYGON); //Begin polygon coordinates
            glColor3f(255, 0, 0);
            for (int i = 0; i < 6; ++i) {
                glVertex2d(sin(i / 6.0 * 2 * 3.1415), cos(i / 6.0 * 2 * 3.1415));
            }
            glEnd(); //End polygon coordinates
            glutSwapBuffers(); //Send the 3D scene to the screen
        }

        int main(int argc, char** argv) {
            //Initialize GLUT
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
            glutInitWindowSize(400, 400); //Set the window size

            //Create the window
            glutCreateWindow("Basic Shapes - videotutorialsrock.com");
            initRendering(); //Initialize rendering

            //Set handler functions for drawing, keypresses, and window resizes
            glutDisplayFunc(drawScene);
            glutKeyboardFunc(handleKeypress);
            glutReshapeFunc(handleResize);

            glutMainLoop(); //Start the main loop. glutMainLoop doesn't return.
            return 0; //This line is never reached
        }

    How can I make it so that a polygon with vertices (0,0) (10,0) (10,10) (0,10) starts at the top left of the screen and has a width and height of 10 pixels? Thanks
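
    A hedged sketch of the usual fix, not the linked answer: swap the perspective projection for an orthographic one that maps one unit to one pixel, flipping y so the origin sits at the top-left:

        //Called when the window is resized
        void handleResize(int w, int h) {
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            //Map x to [0, w] left-to-right and y to [0, h] top-to-bottom,
            //so glVertex2d(10, 10) lands 10 pixels from the top-left corner.
            gluOrtho2D(0.0, (double)w, (double)h, 0.0);
            glMatrixMode(GL_MODELVIEW);
        }

    With this projection (and depth testing off for 2D drawing), the quad (0,0) (10,0) (10,10) (0,10) is a 10x10-pixel square in the top-left corner of the window.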

    Read the article

  • How to implement a "hidden" Android app?

    - by mawg
    I would like an application which is not readily apparent to a casual perusal of the Android device. How best to activate it and bring its screen to the fore? Can I detect a special dialing sequence, like *1234#? Or a hotkey combination? When activated, I guess I can pop up an anonymous screen which does not mention the app but only asks for a password. If the password is OK, then show the app. Any suggestions?

    Read the article

  • Why is the dictionary debug visualizer less useful in Visual Studio 2010 for Silverlight debugging?

    - by Kevin
    I was debugging in Visual Studio 2010, which we just installed, and trying to look at a dictionary in the QuickWatch window. I see Keys and Values, but drilling into those shows the Count and Non-Public members; Non-Public members continues the trail, and I never see the values in the dictionary. I can run test.Take(10) and see the values, but why should I have to do that? I don't have VS 2008 installed anymore to compare, but it seems that I used to be able to debug a dictionary much more easily. Why is it this way now? Is it just a setting I somehow changed on my machine? Test code:

        Dictionary<string, string> test = new Dictionary<string, string>();
        test.Add("a", "b");

    EDIT: I've just tried the same debug in a console app and it works as expected. The other project is a Silverlight 4 application; why are they different? (Screenshots: console app debug and Silverlight 4 debug.)

    Read the article

  • Rewriting a simple Pygame 2D drawing function in C++

    - by Dominic Bou-Samra
    I have a 2D list of vectors (say 20x20, i.e. 400 points) and I am drawing these points on the screen like so:

        for row in grid:
            for point in row:
                pygame.draw.circle(window, white, (point.x, point.y), 2, 0)
        pygame.display.flip()  # redraw the screen

    This works perfectly; however, it's much slower than I expected. I want to rewrite it in C++ and hopefully learn some stuff on the way (I am doing a unit on C++ at the moment, so it'll help). What's the easiest way to approach this? I have looked at DirectX, and have so far followed a bunch of tutorials and drawn some rudimentary triangles. However, I can't find a simple way to just draw a point.
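
    As a hedged starting point, assuming SDL2 rather than DirectX (either works; SDL2 is the smaller API for plain 2D): the renderer batches small primitives cheaply, and the key habit to keep from the Pygame version is presenting once per frame, not once per point. A sketch:

        #include <SDL.h>
        #include <vector>

        int main(int, char**) {
            SDL_Init(SDL_INIT_VIDEO);
            SDL_Window* win = SDL_CreateWindow("points", SDL_WINDOWPOS_CENTERED,
                                               SDL_WINDOWPOS_CENTERED, 640, 480, 0);
            SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

            // A 20x20 grid of points, spread across the window.
            std::vector<SDL_Point> grid;
            for (int y = 0; y < 20; ++y)
                for (int x = 0; x < 20; ++x)
                    grid.push_back({x * 32 + 16, y * 24 + 12});

            bool running = true;
            while (running) {
                SDL_Event e;
                while (SDL_PollEvent(&e))
                    if (e.type == SDL_QUIT) running = false;

                SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
                SDL_RenderClear(ren);
                SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);
                for (const SDL_Point& p : grid) {
                    // 4x4 filled rect standing in for pygame.draw.circle(..., 2, 0)
                    SDL_Rect dot = {p.x - 2, p.y - 2, 4, 4};
                    SDL_RenderFillRect(ren, &dot);
                }
                SDL_RenderPresent(ren); // one present per frame, not per point
            }
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
            return 0;
        }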

    Read the article

  • addChild in the same layer

    - by CEAFDC
    I'm doing an application that puts tons of sprites on the screen in random positions, like throwing cards on a table, but after a while it starts to drop frames, because all the sprites are still there. What I would like to do is add the sprites as if flattened into a single image, so that what's behind isn't stored. Is there some way to do that? The code looks like this:

        var mySprite:MySprite = new MySprite();
        mySprite.x = random;
        mySprite.y = random;
        mySprite.rotation = random;
        addChild(mySprite);

    PS: I will not have to mess with them after they are on the screen.

    Read the article

  • jQuery drag and drop behavior with partially transparent image

    - by Aaron
    I'm trying to develop a drag-and-drop behavior based on the jQuery UI draggable behavior, but am running into some roadblocks. I want to be able to drag several images with transparent regions around a region of the screen, and I want the user to be able to drag the image he clicks, not just whatever draggable div or PNG happens to be z-indexed on top. The image below is a screen grab from my test page. If I click the lower-left region of the blue square through the red thing, I should drag the square and not the red thing. The red thing is what gets dragged, though, because it is on top and the browser does not care about the transparency. My question is: how can I make it behave as expected in this situation and drag the square instead? Edit: Seems I can't attach images as a new user. See this URL for my example image: http://i42.tinypic.com/r1g4sk.png

    Read the article

  • Android image scaling to support multiple resolutions

    - by tyuo9980
    I've coded my game for 320x480, and I figure that the easiest way to support multiple resolutions is to scale the final image. What are your thoughts on this? Would it be CPU-efficient to do it this way? I have all my images placed in the mdpi folder; I'll have everything drawn unscaled onto a buffer, then scale that to fit the screen. All the user input will be scaled as well. I have these two questions: how do you draw a bitmap without Android automatically scaling it, and how do you scale a bitmap?

    Read the article
