Search Results

Search found 27181 results on 1088 pages for 'core image'.


  • In a digital photo, detecting if a mountain is obscured by clouds.

    - by Gavin Brock
    The problem: I have a collection of digital photos of a mountain in Japan. However, the mountain is often obscured by clouds or fog. What techniques can I use to detect that the mountain is visible in the image? I am currently using Perl with the Imager module, but I am open to alternatives. All the images are taken from the exact same position - these are some samples.
    My naïve solution: I started by taking several horizontal pixel samples of the mountain cone and comparing the brightness values to other samples from the sky. This worked well for differentiating good image 1 from bad image 2. However, in the autumn it snowed and the mountain became brighter than the sky, as in image 3, and my simple brightness test started to fail. Image 4 is an example of an edge case: I would classify this as a good image, since some of the mountain is clearly visible.
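
    Since every photo is taken from the same spot, one direction that survives the snow reversal is to test for the presence of the ridge edge itself rather than comparing absolute brightness: sample pixel pairs just above (sky side) and just below (mountain side) the known silhouette line and average the contrast between them; when cloud covers the mountain, that edge largely disappears no matter which side is brighter. Below is a rough sketch of the idea, written in Java with BufferedImage purely for illustration (the asker works in Perl/Imager, where the same sampling ports directly); the ridge coordinates and the threshold are made-up placeholders that would really come from one clear reference photo and some tuning.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class RidgeVisibility {
            // (x, y) points traced along the mountain silhouette in a clear
            // reference photo -- illustrative values only, not from the real images.
            static final int[][] RIDGE = { {400, 310}, {450, 280}, {500, 260}, {550, 285}, {600, 320} };

            public static void main(String[] args) throws IOException {
                BufferedImage img = ImageIO.read(new File(args[0]));
                double contrast = 0;
                for (int[] p : RIDGE) {
                    // luminance just above (sky side) and just below (mountain side) the ridge
                    double sky = luminance(img.getRGB(p[0], p[1] - 8));
                    double mountain = luminance(img.getRGB(p[0], p[1] + 8));
                    contrast += Math.abs(sky - mountain); // absolute difference, so a bright snowy cone still counts
                }
                contrast /= RIDGE.length;
                // the threshold of 20 is a guess; tune it against known good/bad samples
                System.out.println(contrast > 20 ? "mountain visible" : "mountain obscured");
            }

            static double luminance(int rgb) {
                int r = (rgb >> 16) & 0xff, g = (rgb >> 8) & 0xff, b = rgb & 0xff;
                return 0.299 * r + 0.587 * g + 0.114 * b;
            }
        }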

    Read the article

  • Universal iPhone/iPad Window-based app with Core Data crashes on iPhone SDK 4 beta 3

    - by Tarfa
    Hi all. I installed iPhone OS 4.0 Beta 3. When I create a new Window-based universal app with Core Data (File > New Project > Window-based Application, select Universal in the drop-down and check the "Use Core Data for storage" check box), the app launches fine in the iPhone simulator but crashes in the iPad simulator. The console message returned is:

        dyld: Symbol not found: _OBJC_CLASS_$_NSURL
          Referenced from: /Users/tarfa/Library/Application Support/iPhone Simulator/3.2/Applications/5BB644DC-9370-4894-8884-BAEBA64D9ED0/Universal.app/Universal
          Expected in: /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.2.sdk/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation

    I'm stumped. Anyone else experiencing this problem?

    Read the article

  • iPhone SDK: conditional in a switch statement

    - by Oliver
    I'm trying to make a random image appear on the press of a button. It generates a random number, and the switch algorithm swaps the chosen image with the one in the image view, but I want a switch in the Settings app to toggle which set of images to use. I know pretty much how to do it... it's just that it doesn't work. I'm missing some syntax thing... Please help, stackoverflow? It's my birthday.

        int Number = rand() % 30;
        NSString *toggleValue = [[NSUserDefaults standardUserDefaults] stringForKey:@"enabled_preference"];
        switch (Number) {
            if (*toggleValue == 0) {
                case 0:
                    picture.image = [UIImage imageNamed:@"1.png"];
                    break;
                case 1:
                    picture.image = [UIImage imageNamed:@"2.png"];
                    break;
            } else {
                case 0:
                    picture.image = [UIImage imageNamed:@"3.png"];
                    break;
                case 1:
                    picture.image = [UIImage imageNamed:@"4.png"];
                    break;
            }
        }
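
    For what it's worth, the immediate problem in the snippet above is that the if/else is wrapped around the case labels: case 0 and case 1 each appear twice inside one switch (a duplicate-label error), and even if it compiled, jumping to a case label would skip over the if test; the toggle test also probably wants something like [toggleValue boolValue] or an isEqualToString: check rather than dereferencing the pointer. A sketch of the usual structure, branching inside each case instead, shown in Java syntax only because the switch/if rules are the same as in the Objective-C original; the toggle check and names are illustrative stand-ins:

        import java.util.Random;

        public class ImagePicker {
            public static void main(String[] args) {
                int number = new Random().nextInt(2);        // stands in for rand() % 30
                boolean useSecondSet = args.length > 0;      // stands in for the NSUserDefaults toggle
                String imageName;
                switch (number) {                            // branch inside each case, not around the cases
                    case 0:
                        imageName = useSecondSet ? "3.png" : "1.png";
                        break;
                    case 1:
                        imageName = useSecondSet ? "4.png" : "2.png";
                        break;
                    default:
                        imageName = useSecondSet ? "3.png" : "1.png";
                        break;
                }
                System.out.println(imageName);               // in the app this would feed [UIImage imageNamed:...]
            }
        }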

    Read the article

  • Console application with multithreading on a single core.

    - by Harsha
    Hello all, I am reposting my question about multithreading on a single-core processor. The original question is: http://stackoverflow.com/questions/2856239/will-multi-threading-increase-the-speed-of-the-calculation-on-single-processor
    I have been asked a question: at any given time, only one thread is allowed to run on a single core. If so, why do people use multithreading in an application? Let's say you are running a console application, and it is very much possible to write the application to run on the main thread. But still people go for multithreading.
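
    A short illustration of the usual answer: even on one core the threads take turns, but while one thread is blocked waiting on I/O (disk, network, a timer) it is not using the CPU at all, so another thread can run and the program finishes sooner and stays responsive. A minimal sketch in Java; the sleep is a stand-in for a real blocking read, and the same reasoning applies to a .NET console app:

        public class SingleCoreDemo {
            public static void main(String[] args) throws InterruptedException {
                long start = System.currentTimeMillis();

                // Pretend this is a slow network or disk read: the thread is blocked,
                // not burning CPU, so the scheduler is free to run other threads.
                Thread ioWork = new Thread(() -> {
                    try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
                    System.out.println("I/O finished");
                });
                ioWork.start();

                // Meanwhile the main thread keeps doing CPU work on the same core.
                long sum = 0;
                for (int i = 0; i < 50_000_000; i++) sum += i;
                System.out.println("CPU work done, sum = " + sum);

                ioWork.join();
                long elapsed = System.currentTimeMillis() - start;
                // Run sequentially this would take roughly 2000 ms plus the CPU time;
                // with two threads the waiting and the computing overlap.
                System.out.println("Total: " + elapsed + " ms");
            }
        }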

    Read the article

  • Core Data inserting objects

    - by Joe
    I'm trying to get my head around Core Data on the iPhone. This is code from Apple's "Navigation-based Application using Core Data" template (the insertNewObject method):

        // Create a new instance of the entity managed by the fetched results controller.
        NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
        NSEntityDescription *entity = [[fetchedResultsController fetchRequest] entity];
        NSManagedObject *newManagedObject = [NSEntityDescription insertNewObjectForEntityForName:[entity name] inManagedObjectContext:context];

    It seems completely counterintuitive to me that the fetched results controller is used when inserting a new object. I changed the code to this:

        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event" inManagedObjectContext:managedObjectContext];
        NSManagedObject *newManagedObject = [NSEntityDescription insertNewObjectForEntityForName:[entity name] inManagedObjectContext:managedObjectContext];

    which works just as well and does not require access to the fetch request. Am I missing something here? Is there any good reason to use the fetched results controller in the insert method?

    Read the article

  • Grayscale image with colored spotlight in JavaFX

    - by DaUltimateTrooper
    I need a way to show a grayscale image in an ImageView and, on mouse move, if the cursor position is within the ImageView bounds, show a colored spotlight at the mouse position. I have created a sample to help you understand what I need. This sample negates the colors of a colored image in the onMouseMoved event.

        package javafxapplication3;

        import javafx.scene.effect.BlendMode;
        import javafx.scene.Group;
        import javafx.scene.image.Image;
        import javafx.scene.image.ImageView;
        import javafx.scene.input.MouseEvent;
        import javafx.scene.paint.Color;
        import javafx.scene.paint.RadialGradient;
        import javafx.scene.paint.Stop;
        import javafx.scene.Scene;
        import javafx.scene.shape.Circle;
        import javafx.stage.Stage;

        var spotlightX = 0.0;
        var spotlightY = 0.0;
        var visible = false;
        var anImage = Image { url: "{__DIR__}picture1.jpg" }

        Stage {
            title: "Spotlighting"
            scene: Scene {
                fill: Color.WHITE
                content: [
                    Group {
                        blendMode: BlendMode.EXCLUSION
                        content: [
                            ImageView {
                                image: anImage
                                onMouseMoved: function (me: MouseEvent) {
                                    if (me.x > anImage.width - 10 or me.x < 10
                                            or me.y > anImage.height - 10 or me.y < 10) {
                                        visible = false;
                                    } else {
                                        visible = true;
                                    }
                                    spotlightX = me.x;
                                    spotlightY = me.y;
                                }
                            },
                            Group {
                                id: "spotlight"
                                content: [
                                    Circle {
                                        visible: bind visible
                                        translateX: bind spotlightX
                                        translateY: bind spotlightY
                                        radius: 60
                                        fill: RadialGradient {
                                            centerX: 0.5
                                            centerY: 0.5
                                            stops: [
                                                Stop { offset: 0.1, color: Color.WHITE },
                                                Stop { offset: 0.5, color: Color.BLACK },
                                            ]
                                        }
                                    }
                                ]
                            }
                        ]
                    },
                ]
            },
        }

    I am a total newbie, what can I say...

    Read the article

  • Question about Architecture for Viewing Images in ASP.NET MVC App

    - by Charlie Flowers
    I have an approach in mind for an image viewer in a web app, and want to get a sanity check and any thoughts you stackoverflowers might have. Here's the whirlwind nutshell summary:
    I'm working on an ASP.NET MVC application that will run in my company's retail stores. Even though it is a web application, we own the store machines and have control over them. We have a "windows agent" running on the store machine which we can talk to from the browser via javascript (it is a WCF service, and our web app has permission to talk to it from the browser).
    One of the web pages needs to be an "image viewer" page with some common things like Rotate & Zoom. Now, there are some WebForms controls that offer Rotate and Zoom. However, they take up server resources and generate a good bit of traffic between the server and the browser. For example, the Rotate function would cause an ajax call to the server, which would then generate a new image written to a .NET Canvas object, which would then be written to a file on the server, which would then be returned from the ajax call and refreshed inside the browser. Normally, that's a pretty good way of doing things. But in our case, we have code running on the store machine that we can communicate with. This leads me to consider the following approach:
    When the user asks to view an image, we tell our "windows agent" to download it from our image server to the store machine. We then redirect our browser to our image viewer page, which will pull the image from the local file we just wrote to the store machine. When the user clicks "Rotate", we cause JavaScript code in the browser to call our "windows agent" software, asking it to perform the "Rotate" function. The "windows agent" does the rotation using the same kind of imaging control that would formerly have been used on the server, but it does so now on the store machine. Javascript in the browser then refreshes the image on the page to show the newly rotated image. Zoom and similar features would be implemented the same way.
    This seems to be much more efficient, scalable, and responsive for the end-users. However, I've never heard of anything like it being done, mostly because it's rare to have this combination of a web app plus a "windows agent" on the client machine. What do you think? Feasible? Reasonable? Any pitfalls I overlooked or improvements / suggestions you can see? Has anyone done anything like this who would like to offer the wisdom of experience? Thanks!

    Read the article

  • image not loading

    - by Delirium tremens
    Trying to run the code

        // Create a label with an image
        Image image = new Image(display, "interspatial.gif");
        Label imageLabel = new Label(shell, SWT.NONE);
        imageLabel.setImage(image);

    is giving me the error message

        Exception in thread "main" org.eclipse.swt.SWTException: i/o error (java.io.FileNotFoundException: interspatial.gif (O sistema não pode encontrar o arquivo especificado))
            at org.eclipse.swt.SWT.error(Unknown Source)
            at org.eclipse.swt.SWT.error(Unknown Source)
            at org.eclipse.swt.graphics.ImageLoader.load(Unknown Source)
            at org.eclipse.swt.graphics.ImageDataLoader.load(Unknown Source)
            at org.eclipse.swt.graphics.ImageData.<init>(Unknown Source)
            at org.eclipse.swt.graphics.Image.<init>(Unknown Source)
            at examples.ch5.LabelExample.main(LabelExample.java:31)
        Caused by: java.io.FileNotFoundException: interspatial.gif (O sistema não pode encontrar o arquivo especificado)
            at java.io.FileInputStream.open(Native Method)
            at java.io.FileInputStream.<init>(FileInputStream.java:106)
            at java.io.FileInputStream.<init>(FileInputStream.java:66)
            at org.eclipse.swt.internal.Compatibility.newFileInputStream(Unknown Source)
            ... 5 more

    (the Portuguese message means "The system cannot find the specified file"). Additional information: in Eclipse, I expanded Chapter05, then examples.ch5, then right-clicked LabelExample.java, then chose Run As, then 1 Java Application. I tried placing interspatial.gif in the Chapter05 dir, the examples dir, the ch5 dir and the images dir (probably related to another source file from the same chapter). There is a "package examples.ch5;" line at the beginning of the file. Why is the image not loading?
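
    The FileNotFoundException suggests the string "interspatial.gif" is being resolved against the working directory of the launched JVM rather than the source folder or package. One way to make the lookup independent of the working directory is to put the GIF on the classpath (for example next to LabelExample.java) and load it through a stream; a sketch, assuming the image sits in the same package as the class:

        import java.io.InputStream;

        import org.eclipse.swt.SWT;
        import org.eclipse.swt.graphics.Image;
        import org.eclipse.swt.widgets.Display;
        import org.eclipse.swt.widgets.Label;
        import org.eclipse.swt.widgets.Shell;

        // sketch of a helper inside examples.ch5.LabelExample
        static Label createImageLabel(Display display, Shell shell) {
            // Resolved relative to the examples/ch5 package on the classpath,
            // so it no longer matters where Eclipse starts the JVM.
            InputStream in = LabelExample.class.getResourceAsStream("interspatial.gif");
            if (in == null) {
                throw new IllegalStateException("interspatial.gif is not on the classpath");
            }
            Image image = new Image(display, in);
            Label imageLabel = new Label(shell, SWT.NONE);
            imageLabel.setImage(image);
            return imageLabel;
        }

    Loading via the class loader also keeps working later if the example is packaged into a JAR.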

    Read the article

  • Help in J2ME for creating an image and parsing it

    - by HAMED
    Can anyone tell me how I can parse a PNG image into many PNG images in J2ME? For example: I just want to take a source image of 150*150 pixels and parse it into 10 images of 15*15 pixels. I wrote an elementary piece of code that throws an exception. This is my code:

        public class HelloMIDlet extends MIDlet implements CommandListener {
            private boolean midletPaused = false;
            private Command exitCommand;
            private Form form;
            private StringItem stringItem;
            Image im, im2;
            Form form1 = null;

            public HelloMIDlet() {
                try {
                    // create source image
                    im = Image.createImage("/image1.JPG");
                    int height = im.getHeight();
                    int width = im.getWidth();
                    int x = 0;
                    int y = 0;
                    while (y < height) {
                        while (x < width) {
                            // create 15*15 pixel of source image
                            im2 = im.createImage(im, x, y, 15, 15, Sprite.TRANS_NONE);
                            x += 15;
                        }
                        y += 15;
                        x = 0;
                    }
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }

    Please help me to make it right... It's an emergency! Thanks a lot guys...
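
    For reference, Image.createImage(Image, x, y, w, h, transform) is a static MIDP 2.0 method, and as written the loop overwrites im2 on every pass; it can also throw IllegalArgumentException if 15 does not divide the source width/height exactly, because the last tile would reach past the image edge. A sketch of one way the loop might collect the tiles instead, keeping only full 15*15 tiles; the method name and array layout are just illustrative:

        import javax.microedition.lcdui.Image;
        import javax.microedition.lcdui.game.Sprite;

        // Sketch: split the source into full tile*tile pieces and keep them all,
        // instead of overwriting a single im2 reference each time.
        static Image[] splitIntoTiles(Image source, int tile) {
            int cols = source.getWidth() / tile;   // partial tiles at the right/bottom edge are dropped
            int rows = source.getHeight() / tile;
            Image[] tiles = new Image[cols * rows];
            for (int row = 0; row < rows; row++) {
                for (int col = 0; col < cols; col++) {
                    tiles[row * cols + col] = Image.createImage(
                            source, col * tile, row * tile, tile, tile, Sprite.TRANS_NONE);
                }
            }
            return tiles;
        }

    Called as splitIntoTiles(im, 15), a 150*150 source yields a 10*10 grid of 15*15 tiles. If the exception being hit is an IOException from Image.createImage("/image1.JPG"), the resource name and its case inside the JAR would be the first thing to check.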

    Read the article

  • Beagleboard: How do I send/receive data to/from the DSP?

    - by snakile
    I have a BeagleBoard with a TMS320C64x+ DSP. I'm working on an image-processing BeagleBoard application. Here's how it's going to work:

    - The ARM reads an image from a file and puts the image in a 2D array.
    - The ARM sends the matrix to the DSP.
    - The DSP receives the matrix.
    - The DSP performs the image-processing algorithm on the received matrix (the algorithm code uses about 5MB of dynamically allocated memory).
    - The DSP sends the processed image (matrix) to the ARM.
    - The ARM receives the matrix.
    - The ARM saves the processed image to a file.

    I've already written the code for steps 1, 3, 5. What is the easiest way to do steps 3+4 (sending the data)? Code examples are welcome.

    Read the article

  • Image appearing in the wrong place.

    - by Luke
    I have a list where I want each line to be preceded by a small image.

    The CSS:

        .left div.arrow { background-image: url(../images/design/box.png); float: left; height: 15px; width: 15px; margin-right: 2px; }
        .left a.list:link, a:visited, a:active { color: black; font-size: 0.8em; text-decoration: none; display: block; float: left; }

    The HTML:

        <div class="panel">My quick links</div>
        <div class="arrow"></div>
        <a href="/messages.php?p=new" class="list">Send a new message</a>
        <br />
        <div class="arrow"></div>
        <a href="/settings.php#password" class="list">Change my password</a>
        <br />
        <div class="arrow"></div>
        <a href="/settings.php#picture" class="list">Upload a new site image</a>
        <br />

    As you can see, each image should go before the writing. On the first line, the text "Send a new message" has the preceding image. However, each line afterwards has the image at the end. So "Send a new message" has an image at the start and finish. It is like the images are staying at the end of the previous line. Any ideas?

    Read the article

  • c++ library for endian-aware reading of raw file stream metadata?

    - by Kache4
    I've got raw data streams from image files, like:

        vector<char> rawData(fileSize);
        ifstream inFile("image.jpg");
        inFile.read(&rawData[0], fileSize);

    I want to parse the headers of different image formats for height and width. Is there a portable library that can read ints, longs, shorts, etc. from the buffer/stream, converting for endianness as specified? I'd like to be able to do something like:

        short x = rawData.readLeShort(offset);

    or

        long y = rawData.readBeLong(offset);

    An even better option would be a lightweight & portable image metadata library (without the extra weight of an image manipulation library) that can work on raw image data. I've found that the Exif libraries out there don't support PNG and GIF.

    Read the article

  • iTextSharp Overlay Image

    - by pennylane
    Hi guys, I have an instance where I have a logo image as part of some artwork. If a user uploads a new logo, I have a form field which is larger than the default logo. I then use that form field to position the new image. The problem is I need to set the background colour of that form field to white so that it covers the old logo in the event that the new image is smaller than the old logo. What I have done is:

        foreach (var imageField in imageReplacements)
        {
            fields.SetFieldProperty(imageField.Key, "bgcolor", iTextSharp.text.Color.WHITE, null);
            fields.RegenerateField(imageField.Key);
            PdfContentByte overContent = stamper.GetOverContent(imageField.Value.PageNumber);
            float[] logoArea = fields.GetFieldPositions(imageField.Key);
            if (logoArea != null)
            {
                iTextSharp.text.Rectangle logoRect = new iTextSharp.text.Rectangle(logoArea[1], logoArea[2], logoArea[3], logoArea[4]);
                var logo = iTextSharp.text.Image.GetInstance(imageField.Value.Location);
                if (logo.Width >= logoRect.Width || logo.Height >= logoRect.Height)
                {
                    logo.ScaleToFit(logoRect.Width, logoRect.Height);
                }
                logo.Alignment = iTextSharp.text.Image.ALIGN_LEFT;
                logo.SetAbsolutePosition(logoRect.Left, logoArea[2] + (logoRect.Height - logo.ScaledHeight) / 2);
                // left: logoArea[3] - logo.ScaledWidth + (logoRect.Width - logo.ScaledWidth) / 2
                overContent.AddImage(logo);
            }
        }

    The problem with this is that the background colour of the field is set to white and the image then doesn't appear. If I remove the SetFieldProperty and RegenerateField calls, the image replacement works fine. Is there a way to set a stacking order on layers?

    Read the article

  • Resizing FileReference image then reuploading, only reuploads original image.

    - by pfunc
    I can't figure out how to do this. Someone selects an image after calling FileReference.browse(). I take that image and make a thumbnail in Flash. Then I upload that image like so:

        var newFileReq:URLRequest = new URLRequest(FILE_UPLOAD_TEMP);
        newFileReq.contentType = "application/octet-stream";
        var fileReqVars:URLVariables = new URLVariables();
        fileReqVars.image = myThumbImage;
        fileReqVars.folder = "Thumbs";
        newFileReq.data = fileReqVars;
        newFileReq.method = URLRequestMethod.POST;
        //upload the first image
        fileRef.addEventListener(Event.COMPLETE, onFirstFileUp);
        fileRef.upload(newFileReq, "Filedata");

    All this does is upload the original image. How do I change the fileRef to upload the new thumb? I have traced out the size of the "myThumbImage" and it is correct. I have placed it visually on the stage after creating the thumb, and it seems like it works. But when I upload it to an aspx page (that basically just throws it into a folder), it uploads the original larger image.

    Read the article

  • User Control - dependency property to Change Image Issues

    - by mflair2000
    I'm having issues setting the image from a dependency property. It seems like the trigger doesn't fire. I just want to hide/show an image, or set the source if possible.

        public static readonly DependencyProperty HasSingleValueProperty =
            DependencyProperty.Register("HasSingleValue", typeof(bool), typeof(LevelControl),
                new FrameworkPropertyMetadata(false, FrameworkPropertyMetadataOptions.BindsTwoWayByDefault));

        public bool HasSingleValue
        {
            get { return (bool)GetValue(HasSingleValueProperty); }
            set { SetValue(HasSingleValueProperty, value); }
        }

        public LevelControl()
        {
            this.InitializeComponent();
            //this.DataContext = this;
            LayoutRoot.DataContext = this;
        }

    Control markup:

        <Grid x:Name="LayoutRoot">
            <Image x:Name="xGreenBarClientTX" HorizontalAlignment="Stretch" Height="13" Margin="7,8.5,7,0"
                   Stretch="Fill" VerticalAlignment="Top" Width="47" Canvas.Left="181.67" d:LayoutOverrides="Height">
                <Image.Style>
                    <Style TargetType="{x:Type Image}">
                        <Style.Triggers>
                            <DataTrigger Binding="{Binding HasSingleValue}" Value="True">
                                <Setter Property="Opacity" Value="100"/>
                            </DataTrigger>
                            <DataTrigger Binding="{Binding HasSingleValue}" Value="False">
                                <Setter Property="Opacity" Value="0"/>
                            </DataTrigger>
                        </Style.Triggers>
                    </Style>
                </Image.Style>
            </Image>

    Read the article

  • Issue displaying a local image from XAML

    - by Flack
    Hello, I have the below simple XAML:

        <Window x:Class="WpfApplication1.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Window1" Height="300" Width="300">
            <Grid>
                <Image Source="happyface.jpg"/>
            </Grid>
        </Window>

    happyface.jpg is included in the project, its Build Action is set to "Content" and Copy To Output Directory is set to "Copy Always". When looking at the app through the VS designer, everything is ok and I see the image. However, when I run the app, no image is displayed. I see the image is copied to the output directory. If I put in the entire path as the source (C:\SANDBOX\WpfApplication1\WpfApplication1\bin\Debug) it works. Any ideas as to why the image is not displayed when I run the app? I read about pack URIs, but thought that to simply reference a loose image in the current directory, I could just use the image name. Thank you.

    Read the article

  • Has jQuery core development been slowing down?

    - by David Murdoch
    So, I regularly head over to jQuery's commit history on GitHub just to read through the new code committed to jQuery core. But there hasn't been anything new committed since April 24th. I've already read through jQuery core a few times and I'm pretty familiar with it, which is why I like reading the commits. I just like to see what changed, why it was changed, etc. Why has there been a slowdown in jQuery commits on GitHub? Does anyone else have recommendations for where I can go to view good JavaScript code being developed? My motive for reading jQuery's commit history is similar to the reasons I browse through accepted answers here on stackoverflow - to learn from people smarter than me. With that said, I am interested in the answer to this question's title, but I am more interested in finding a substitute for reading the jQuery commits.

    Read the article

  • How to create an iPad directory view like animation?

    - by mahes25
    In the recent iOS 4.2 update, Apple introduced a nice animation for creating one-level directories. I am trying to figure out how to implement a similar animation in my project. I would deeply appreciate it if anyone could give me any pointers to do this efficiently. From my investigation, I believe this animation or effect could be done very efficiently using Core Image, which would allow me to write a custom filter. Unfortunately, Core Image is not available on the iPhone. So how can I do it? I read a blog post that explained a Core Animation scheme to create an iPad flip clock. The problem I have is similar but has important differences. Besides, I am not excited about saving the subimage combinations, which I believe can cause a memory issue. Please enlighten me on the possible ways of doing this animation. I am relatively new to iOS programming, so I might have missed obvious ways of doing this animation or effect.

    Read the article

  • .net Drawing.Graphics.FromImage() returns blank black image

    - by joox
    I'm trying to rescale an uploaded JPEG in ASP.NET. So I go:

        Image original = Image.FromStream(myPostedFile.InputStream);
        int w = original.Width, h = original.Height;
        using (Graphics g = Graphics.FromImage(original))
        {
            g.ScaleTransform(0.5f, 0.5f);
            ... // e.g.
            using (Bitmap done = new Bitmap(w, h, g))
            {
                done.Save(Server.MapPath(saveas), ImageFormat.Jpeg); //saves blank black, though with correct width and height
            }
        }

    This saves a virgin black JPEG whatever file I give it. Though if I take the input image stream immediately into the done bitmap, it does recompress and save it fine, like:

        Image original = Image.FromStream(myPostedFile.InputStream);
        using (Bitmap done = new Bitmap(original))
        {
            done.Save(Server.MapPath(saveas), ImageFormat.Jpeg);
        }

    Do I have to do some magic with g?

    Update: I tried:

        Image original = Image.FromStream(fstream);
        int w = original.Width, h = original.Height;
        using (Bitmap b = new Bitmap(original)) //also tried new Bitmap(w,h)
        using (Graphics g = Graphics.FromImage(b))
        {
            g.DrawImage(original, 0, 0, w, h); //also tried g.DrawImage(b, 0, 0, w, h)
            using (Bitmap done = new Bitmap(w, h, g))
            {
                done.Save(Server.MapPath(saveas), ImageFormat.Jpeg);
            }
        }

    Same story - pure black of correct dimensions.

    Read the article

  • Pre-loaded database on iPhone?

    - by Julian
    Hi, I have recently developed an app using Core Data as the storage db. The app allowed the user to read and write to the db. I am now developing a new app in which the user doesn't need to write anything to the db; instead the app just needs to read the data. The data has relationships etc., so I cannot just use a plist or something similar. My question is: should I use Core Data for such a requirement, and if so, how would I go about entering the data and then releasing the app? Would I have to code the data entry which would populate the db, then remove all this code (as I don't want the database to repopulate every time the user opens the app)? Is there a way to create a Core Data model using SQL commands as with SQLite, i.e. INSERT INTO... etc.? Any ideas/thoughts would be very helpful. Many thanks, Jules

    Read the article

  • Is there a difference between transient properties defined in the data model and in the custom subclass?

    - by mystify
    I was reading that setting the value of a transient property always results in marking the managed object as "dirty". However, what I don't get is this: if I make a subclass of NSManagedObject and use some extra properties which I don't need to be persisted, how does Core Data know about them, and how can it mark the object as dirty when I access these? Again, they're not defined in the data model, so Core Data has no really good hint that they are there. Or does Core Data use some kind of introspection to analyze my custom class and figure out what properties I have in there?

    Read the article

  • printf on Ubuntu: Segmentation fault (core dumped)

    - by Someone
    I have this code:

        int a;
        printf("&a = %u\n", (unsigned)&a);
        printf("a\n");
        printf("b\n");
        printf("c\n");
        printf("d\n");

    I tried to print the address of a variable. But it fails on the line printf("a\n"); and says Segmentation fault (core dumped). Output:

        &a = 134525024
        Segmentation fault (core dumped)

    When I remove the line printf("&a = %u\n", (unsigned)&a); from the code, it succeeds. Output:

        a
        b
        c
        d

    What is wrong with my code?

    Read the article

  • How to drag only one image with the iPhone SDK

    - by loka
    Hi! I want to create a little app that takes two images, and I want to make only the top image draggable. After research, I found this solution:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [[event allTouches] anyObject];
            image.alpha = 0.7;
            if ([touch view] == image) {
                CGPoint location = [touch locationInView:self.view];
                image.center = location;
            }
        }

    It works, but the problem is that the image is draggable from its center and I don't want that. So I found another solution:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            // Retrieve the touch point
            CGPoint pt = [[touches anyObject] locationInView:self.view];
            startLocation = pt;
            [[self view] bringSubviewToFront:self.view];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            // Move relative to the original touch point
            CGPoint pt = [[touches anyObject] locationInView:self.view];
            frame = [self.view frame];
            frame.origin.x += pt.x - startLocation.x;
            frame.origin.y += pt.y - startLocation.y;
            [self.view setFrame:frame];
        }

    It works very well, but when I add another image, all the images of the view are draggable at the same time. I'm a beginner with iPhone programming and I have no idea how I can make only the top image draggable. Thank you in advance for your help!!

    Read the article

  • What exactly is a memory page fault?

    - by dontWatchMyProfile
    From the docs:

        Note: Core Data avoids the term unfaulting because it is confusing. There's no "unfaulting" a virtual memory page fault. Page faults are triggered, caused, fired, or encountered. Of course, you can release memory back to the kernel in a variety of ways (using the functions vm_deallocate, munmap, or sbrk). Core Data describes this as "turning an object into a fault".

    Is a fault in Core Data essentially a memory page fault? I have only a slight idea about what a memory page is. I believe it's a kind of "piece of code in memory" which is needed to execute procedures and stuff like that, and as the app is running, pieces of code are sucked into memory as "pages" and thrown away when they're not needed anymore. Probably 99% wrong ;) Anyone?

    Read the article
