Search Results

Search found 18794 results on 752 pages for 'graphics design'.

Page 204/752

  • Pre Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to. I have worked on many small-scale projects, and after my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, meaning the underlying code libraries come first; the user interface comes last, because the library usually dictates what the UI needs. As my projects get bigger, I worry that my "spec" or design document should grow with them.

    From my investigations, the paragraph above is echoed across the internet in one fashion or another. Where a UI is concerned there is a bit more information available, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code; it seems from my research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so:

        Implement binary file reader
        Implement binary file writer
        Create an object to encapsulate the data exposed to the caller

    Any programmer worth his salt knows that between those three to-do items could sit a wall of code spread across multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively: by the time one has mangled the pseudocode it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We all agree that a high-level overview is needed, but how high should it be? Do you design to statement, class or concept level? What works for you?

    Read the article

  • Does Quartz2D test whether a line intersects the visible rect before drawing it?

    - by ddnv
    I'm drawing a big diagram that consists of a lot of lines. I do it in the drawRect: method of a UIView. The diagram is larger than the view's layer, so I check each line and draw it only if it intersects the visible rect. But at some point I wondered: should I be doing this at all? Maybe Quartz already does this test. So the question is: when I use the function CGContextAddLineToPoint(), does Core Graphics test the line for intersection with the layer's rect, or does it just draw it anyway?
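
    For illustration, here is a minimal sketch of the manual culling described above, written in Java rather than Core Graphics (the class and method names are invented for the example, not taken from the post):

        import java.awt.Graphics2D;
        import java.awt.geom.Line2D;
        import java.awt.geom.Rectangle2D;
        import java.util.List;

        public class LineCuller {
            // Draw only the segments that actually cross the currently visible rectangle.
            static void drawVisibleLines(Graphics2D g, Rectangle2D visibleRect, List<Line2D> lines) {
                for (Line2D line : lines) {
                    if (!visibleRect.intersectsLine(line)) {
                        continue; // segment lies entirely outside the visible area
                    }
                    g.draw(line);
                }
            }
        }

    Whether Core Graphics performs an equivalent rejection internally is exactly what the question asks; the sketch only shows the per-segment test the asker is currently doing by hand.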

    Read the article

  • How do I re-set a BMP file's resolution (DPI) indicator?

    - by Joshua Fox
    I have a BMP tagged with a resolution of 299 DPI, and I'd like to change that to 99 DPI. Importantly, the DPI marker in a BMP has no structural meaning: an image has a certain width and height in pixels, and the displaying application can show it at any physical width, so the DPI is just a hint. However, I am dealing with some third-party software that behaves differently depending on this marker, so I need to reset it. I would appreciate suggestions on how to do this programmatically (especially in Java) as well as in GUI graphics tools (e.g. GIMP).
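
    One way to do this programmatically, sketched in Java: in the common 40-byte BITMAPINFOHEADER, the biXPelsPerMeter and biYPelsPerMeter fields sit at byte offsets 38 and 42 of the file (the 14-byte file header plus 24 and 28 into the info header), stored as little-endian 32-bit integers, so the DPI hint can be patched in place. This assumes a standard Windows BMP header and does no validation beyond that:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class BmpDpiPatcher {
            /** Rewrites the horizontal and vertical resolution fields of a BMP file. */
            public static void setDpi(String path, int dpi) throws IOException {
                // BMP stores resolution in pixels per meter (1 inch = 0.0254 m).
                int pixelsPerMeter = (int) Math.round(dpi / 0.0254);
                try (RandomAccessFile file = new RandomAccessFile(path, "rw")) {
                    writeIntLE(file, 38, pixelsPerMeter); // biXPelsPerMeter
                    writeIntLE(file, 42, pixelsPerMeter); // biYPelsPerMeter
                }
            }

            // RandomAccessFile.writeInt is big-endian, so write the four bytes by hand.
            private static void writeIntLE(RandomAccessFile file, long offset, int value) throws IOException {
                file.seek(offset);
                file.write(value & 0xFF);
                file.write((value >> 8) & 0xFF);
                file.write((value >> 16) & 0xFF);
                file.write((value >> 24) & 0xFF);
            }
        }

    Calling BmpDpiPatcher.setDpi("image.bmp", 99) changes only the hint, not the pixel data. In GIMP the same value is exposed as the image's print resolution (Image > Print Size) and is written out when the file is exported as BMP.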

    Read the article

  • How to "flick" a UIImageView?

    - by Meltemi
    I've got some UIViews that I'd like the user to be able to "flick" across the screen. They're not scroll views; they simply contain a raster image (PNG). Can anyone point me to some sample code to help get me started? Something a little more heavyweight than the "MoveMe" sample, something that detects a "flick" (as opposed to a "nudge" or a drag-and-drop) and then carries the view off in the direction of the flick? OpenGL is probably overkill; if possible I'd like to stay within the realm of Core Graphics/Animation.

    Read the article

  • Efficiently draw a grid in Windows Forms

    - by Joel
    I'm writing an implementation of Conway's Game of Life in C#. This is the code I'm using to draw the grid; it lives in my panel_Paint event, and g is the graphics context:

        for (int y = 0; y < numOfCells * cellSize; y += cellSize)
        {
            for (int x = 0; x < numOfCells * cellSize; x += cellSize)
            {
                g.DrawLine(p, x, 0, x, y + numOfCells * cellSize);
                g.DrawLine(p, 0, x, y + size * drawnGrid, x);
            }
        }

    When I run my program, it is unresponsive until it finishes drawing the grid, which takes a few seconds at numOfCells = 100 and cellSize = 10. Removing all the multiplication makes it faster, but not by much. Is there a better/more efficient way to draw my grid? Thanks
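
    For what it's worth, the nested loops above draw every grid line numOfCells times over (roughly 20,000 DrawLine calls for a 100x100 grid where about 200 would do), and they do it on every repaint. A minimal sketch of the alternative, written here in Java rather than C# and with invented names such as NUM_CELLS: draw each line once, in two separate loops, into an off-screen image that the paint handler only has to blit:

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.image.BufferedImage;

        public class GridCache {
            static final int NUM_CELLS = 100;
            static final int CELL_SIZE = 10;

            /** Renders the grid once; the paint handler can then just draw this image. */
            static BufferedImage buildGridImage() {
                int side = NUM_CELLS * CELL_SIZE;
                BufferedImage image = new BufferedImage(side + 1, side + 1, BufferedImage.TYPE_INT_RGB);
                Graphics g = image.getGraphics();
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, side + 1, side + 1);
                g.setColor(Color.LIGHT_GRAY);
                // One loop per direction: each grid line is drawn exactly once.
                for (int x = 0; x <= side; x += CELL_SIZE) {
                    g.drawLine(x, 0, x, side);   // vertical lines
                }
                for (int y = 0; y <= side; y += CELL_SIZE) {
                    g.drawLine(0, y, side, y);   // horizontal lines
                }
                g.dispose();
                return image;
            }
        }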

    Read the article

  • Looking for a mobile platform to view vector data and use it like a simple map

    - by Orchestrator
    I would like to develop, or use an existing platform, that will allow me to view custom vector data and use it as a map on mobile phones such as Android/iPhone (maybe even WP7). I'm hoping there's already a good infrastructure for what I need so I don't have to build one myself. In conclusion: is there any existing platform that might answer my needs? If not, how would you suggest I begin? How should I store my vector data, and how would I read it? Should I render it with a graphics engine like OpenGL? Is there any chance such a solution could be cross-platform? I know it's possible, since it has already been done with apps like Waze, which works the same on iOS and Android. Thanks!

    Read the article

  • Java: VolatileImage slower than BufferedImage

    - by Norswap
    I'm making a game in Java and used BufferedImages to render content to the screen. I had performance issues on the low-end machines where the game is supposed to run, so I switched to VolatileImage, which is normally faster. Except it actually slows the whole thing down. The images are created with GraphicsConfiguration.createCompatibleVolatileImage(...) and are drawn to the screen with Graphics.drawImage(...) (follow the link to see which overload specifically). They are drawn onto a Canvas using double buffering. Does anyone have an idea of what is going wrong here?
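
    A common cause of this symptom is that a VolatileImage lives in video memory and can be lost or invalidated at any time, so it has to be validated against its GraphicsConfiguration and re-rendered when needed; skipping that step, or copying the image through a non-accelerated path, can easily make it slower than a plain BufferedImage. A minimal sketch of the usual render loop, with drawScene standing in for whatever actually paints the frame (an assumption, not code from the post):

        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.GraphicsConfiguration;
        import java.awt.image.VolatileImage;

        public class VolatileRenderer {
            /** Draws one frame into the volatile image, recreating or revalidating it as needed. */
            static VolatileImage render(GraphicsConfiguration gc, VolatileImage image,
                                        int width, int height, Graphics screen) {
                do {
                    if (image == null
                            || image.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                        // The surface is missing or no longer matches the display; recreate it.
                        image = gc.createCompatibleVolatileImage(width, height);
                    }
                    Graphics2D g = image.createGraphics();
                    drawScene(g);
                    g.dispose();
                    screen.drawImage(image, 0, 0, null);
                    // If the surface was lost while drawing, loop and redo the whole frame.
                } while (image.contentsLost());
                return image;
            }

            private static void drawScene(Graphics2D g) {
                // placeholder for the game's actual drawing code
            }
        }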

    Read the article

  • Why do we need a normalized coordinate system?

    - by jcyang
    Hi, I have a problem understanding the following sentences in my textbook, Computer Graphics with OpenGL: "To make the viewing process independent of the requirements of any output device, graphics systems convert object descriptions to normalized coordinates and apply the clipping routines." Why do normalized coordinates make the viewing process independent of the requirements of any output device? Aren't the projection coordinates already independent of the output device? We only need to scale and then translate the projection coordinates to get device coordinates, so why do we need to convert the projection coordinates to normalized coordinates first? "Clipping is usually performed in normalized coordinates. This allows us to reduce computations by first concatenating the various transformation matrices." Why is clipping usually performed in normalized coordinates, and which transformation matrices are being concatenated? Thanks, jcyang.
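
    A small illustration of the first point, sketched in Java with made-up resolutions: once coordinates are normalized, the clip test is always against the same fixed range regardless of the device, and only the final viewport transform knows the actual resolution.

        public class NormalizedCoords {
            /** Clip test in normalized device coordinates: the same fixed range for every device. */
            static boolean insideNdc(double x, double y) {
                return x >= -1 && x <= 1 && y >= -1 && y <= 1;
            }

            /** The only device-dependent step: map NDC onto a concrete pixel viewport. */
            static int[] ndcToDevice(double x, double y, int deviceWidth, int deviceHeight) {
                int px = (int) Math.round((x + 1) * 0.5 * (deviceWidth - 1));
                int py = (int) Math.round((1 - y) * 0.5 * (deviceHeight - 1)); // flip y for screen space
                return new int[] { px, py };
            }

            public static void main(String[] args) {
                double x = 0.25, y = -0.5;           // a point already in NDC
                System.out.println(insideNdc(x, y)); // clipping decision is device independent
                // Only this call changes between an 800x600 window and a 1920x1080 one.
                int[] p = ndcToDevice(x, y, 800, 600);
                System.out.println(p[0] + ", " + p[1]);
            }
        }

    As for the second quote: because the normalization step is itself a linear transform, it can be concatenated with the modelview and projection matrices into a single matrix applied once per vertex, which is where the computational saving comes from.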

    Read the article

  • Java SWT - dissolve (fade) from one image to the next.

    - by carillonator
    I'm pretty new to Java and SWT, and I'm hoping to have one image dissolve into the next. I have one image in a Label right now (relevant code):

        Device dev = shell.getDisplay();
        try {
            Image photo = new Image(dev, "photo.jpg");
        } catch (Exception e) {
        }
        Label label = new Label(shell, SWT.IMAGE_JPEG);
        label.setImage(photo);

    Now I'd like photo to fade into a different image at a speed that I specify. Is this possible as set up here, or do I need to delve further into the org.eclipse.swt.graphics API? Also, this will be a slide show that may contain hundreds of photos, only ever moving forward (never back to a previous image). Considering this, is there something I explicitly need to do to remove the old images from memory? Thanks!!
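
    A Label can only show one static Image, so a crossfade usually means painting on a Canvas instead: draw the current photo, then draw the next one on top with an increasing alpha. A rough sketch of the idea, assuming current and next are already-loaded Images and display and canvas already exist (all variable names here are illustrative); it also disposes the outgoing Image once the fade finishes, which speaks to the memory question, since SWT Images hold native resources until dispose() is called:

        // Inside a class that already has fields: Display display; Canvas canvas; Image current, next;
        final int steps = 25;          // number of fade increments
        final int frameDelayMs = 40;   // roughly a one-second fade at 25 steps
        final int[] alpha = { 0 };

        canvas.addPaintListener(e -> {
            e.gc.drawImage(current, 0, 0);      // outgoing photo
            e.gc.setAlpha(alpha[0]);            // 0 = invisible, 255 = opaque
            e.gc.drawImage(next, 0, 0);         // incoming photo fades in on top
        });

        Runnable fadeStep = new Runnable() {
            public void run() {
                if (canvas.isDisposed()) {
                    return;
                }
                alpha[0] = Math.min(255, alpha[0] + 255 / steps);
                canvas.redraw();
                if (alpha[0] < 255) {
                    display.timerExec(frameDelayMs, this);
                } else {
                    current.dispose();          // free the old photo's native resources
                    // advance: current = next, then load the following photo into next
                }
            }
        };
        display.timerExec(frameDelayMs, fadeStep);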

    Read the article

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines: It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance and that if any requirements were missed then the original tender should be referenced. Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them) but actually supersedes them as the basis for system acceptance. Is one of the above arguments more valid than the other?

    Read the article

  • Will the Driver Support for Intel HD Graphics be Improved in 12.10?

    - by Hiranya
    I recently installed Ubuntu 12.04 on an HP Pavilion dv4 laptop. This is a Core i7 machine with Intel HD graphics and also a separate nVidia VGA card. I had a lot of issues getting Ubuntu 12.04 working on this system. First there were issues booting the live CD for installation, which I worked around by using the 'nomodeset' option. I continued to have similar issues after the installation completed, so I had to permanently add nomodeset to my GRUB boot configuration. At the moment I have a working installation, but there are many issues:

        - The Ubuntu GUI is a bit flaky at times: the mouse pointer flickers when hovering over certain icons, and certain things don't get rendered properly on the screen.
        - I can't access any of the tty consoles. Hitting Ctrl+Alt+F[1-6] gives me a blank screen, and once that happens I can't even come back to the UI by hitting Ctrl+Alt+F7. I've realized the tty consoles are actually working; I just can't see the text. If I type a command like 'sudo reboot' into the empty screen, the machine reboots.
        - I can't get external displays (monitors, projectors etc.) working, but I think this is probably because the VGA out is wired to the nVidia card, which is not being used by Linux.
        - The colord program crashes every now and then, triggering a popup message.

    So my main question is: will the support for Intel HD graphics be improved in the next release, or will I have to keep using the nomodeset option there too? I'd also appreciate it if anybody could shed some light on any of the issues listed above. Thanks in advance.

    Read the article

  • What to do about "system running in low-graphics mode"?

    - by ubuntubabe
    My Dell, which was 5 years old, suddenly karked it and I got the "low graphics" black screen and its useless dialogue box. Believing it was a dead graphics card, I went out and bought a brand new machine. I put the new machine aside and kept trying to get the Dell to start. I eventually got to the command line via Ctrl+Alt+F1, logged into my account from there, and simply started a series of sudo apt-get remove commands for various packages I knew were installed on my PC (software without any great consequence, like Google Earth, tweak, Skype etc.). Lo and behold, after a sudo reboot my computer was fine again! So now I have 2 computers. BUT one week after buying the other one and installing 12.04 (because I love Ubuntu), the SAME PROBLEM arrived! I once again removed Google Earth and Skype, did a sudo reboot, and everything worked as before. I think there is a bug or something in 12.04, as this problem has never arisen with any other version of Ubuntu.

    Read the article

  • How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics?

    - by PeterDC
    Background: I'm a 3D artist (as a hobby) and have recently started using Ubuntu 12.04 LTS as a dual-boot with Windows 7. It's running on a fairly new 64-bit Toshiba laptop with an nVidia GeForce GT 540M GPU (graphics card). It also, however, has Intel integrated graphics (which I suspect Ubuntu has been using). So, when I render my 3D scenes to images on Windows, I am able to choose between using my CPU or my nVidia GPU (faster). From the 3D application, I can set the GPU to use either CUDA or OpenCL. In Ubuntu, there's no GPU option. After doing (too much?) research on the issues with Linux and the nVidia Optimus technology, I am slightly more enlightened, but a lot more confused. I don't care one bit about the Optimus technology, as battery life is not by any means an issue for me. Here's my question: What can I do to be able to use CUDA-utilizing programs (such as Blender) on my nVidia GPU in Ubuntu? Will I need nVidia drivers? (I have heard they don't play nicely with Optimus setups on Linux.) Is there at least a way to use OpenCL on my GPU in Ubuntu?

    Read the article

  • Initialization of components with interdependencies - possible antipattern?

    - by Rosarch
    I'm writing a game that has many components. Many of these are dependent upon one another. When creating them, I often get into catch-22 situations like "WorldState's constructor requires a PathPlanner, but PathPlanner's constructor requires WorldState." Originally, this was less of a problem, because references to everything needed were kept around in GameEngine, and GameEngine was passed around to everything. But I didn't like the feel of that, because it felt like we were giving too much access to different components, making it harder to enforce boundaries. Here is the problematic code:

        /// <summary>
        /// Constructor to create a new instance of our game.
        /// </summary>
        public GameEngine()
        {
            graphics = new GraphicsDeviceManager(this);
            Components.Add(new GamerServicesComponent(this));

            // Sets dimensions of the game window
            graphics.PreferredBackBufferWidth = 800;
            graphics.PreferredBackBufferHeight = 600;
            graphics.ApplyChanges();
            IsMouseVisible = true;

            screenManager = new ScreenManager(this);
            // Adds ScreenManager as a component, making all of its calls done automatically
            Components.Add(screenManager);

            // Tell the program to load all files relative to the "Content" directory.
            Assets = new CachedContentLoader(this, "Content");

            inputReader = new UserInputReader(Constants.DEFAULT_KEY_MAPPING);
            collisionRecorder = new CollisionRecorder();

            WorldState = new WorldState(new ReadWriteXML(), Constants.CONFIG_URI, this, contactReporter);
            worldQueryUtils = new WorldQueryUtils(worldQuery, WorldState.PhysicsWorld);
            ContactReporter contactReporter = new ContactReporter(collisionRecorder, worldQuery, worldQueryUtils);

            gameObjectManager = new GameObjectManager(WorldState, assets, inputReader, pathPlanner);
            worldQuery = new DefaultWorldQueryEngine(collisionRecorder, gameObjectManager.Controllers);
            gameObjectManager.WorldQueryEngine = worldQuery;

            pathPlanner = new PathPlanner(this, worldQueryUtils, WorldQuery);
            gameObjectManager.PathPlanner = pathPlanner;

            combatEngine = new CombatEngine(worldQuery, new Random());
        }

    Here is an excerpt of the above that's problematic:

        gameObjectManager = new GameObjectManager(WorldState, assets, inputReader, pathPlanner);
        worldQuery = new DefaultWorldQueryEngine(collisionRecorder, gameObjectManager.Controllers);
        gameObjectManager.WorldQueryEngine = worldQuery;

    I hope that no one ever forgets that setting of gameObjectManager.WorldQueryEngine, or else it will fail. Here is the problem: gameObjectManager needs a WorldQuery, and WorldQuery needs a property of gameObjectManager. What can I do about this? Have I found an anti-pattern?
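
    One common way out of this particular cycle is to notice that DefaultWorldQueryEngine does not need the whole GameObjectManager, only its Controllers collection, so the shared piece can be created first and handed to both constructors; then neither object needs a half-built instance of the other, and there is no post-construction setter left to forget. A rough sketch of the idea in Java (the types here are simplified stand-ins for the ones in the post, not real code from it):

        import java.util.ArrayList;
        import java.util.List;

        class Controller {}

        class WorldQueryEngine {
            private final List<Controller> controllers;
            WorldQueryEngine(List<Controller> controllers) {
                this.controllers = controllers; // reads from the shared, live collection
            }
        }

        class GameObjectManager {
            private final List<Controller> controllers;
            private final WorldQueryEngine worldQuery;
            GameObjectManager(List<Controller> controllers, WorldQueryEngine worldQuery) {
                this.controllers = controllers; // adds to the same shared collection
                this.worldQuery = worldQuery;
            }
        }

        class GameEngine {
            GameEngine() {
                // Create the shared dependency first; both objects see the same list,
                // so the constructor-ordering problem disappears.
                List<Controller> controllers = new ArrayList<>();
                WorldQueryEngine worldQuery = new WorldQueryEngine(controllers);
                GameObjectManager manager = new GameObjectManager(controllers, worldQuery);
            }
        }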

    Read the article

  • Page layout software that allows mixed visual and programmatic editing

    - by Justin Love
    I'd like to use a programming model for custom graphics and precision placement, and an interactive visual mode for large-scale layout and less precise placement. I've used tools (PostScript, various vector drawing programs) that do one of these modes well but leave me pining for the other. Which tools should I be investigating? I'm currently on OS X. Examples: creating diagrams with precise spacing, or sets of cards, either of which would likely draw from some sort of data.

    Read the article

  • Singleton design pattern vs Singleton beans in Spring container

    - by Peeyush
    As we all know, beans are singletons by default in the Spring container. If we have a web application based on the Spring framework, do we really need to implement the Singleton design pattern to hold global data, rather than just creating a bean through Spring? Please bear with me if I'm not able to explain what I actually meant to ask.
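
    For context, a minimal sketch of the two approaches (GlobalData is a made-up example). Spring's "singleton" scope means one instance per application context rather than one per JVM, and the class itself stays a plain object that is easy to test or re-scope, which is the usual argument for preferring a Spring bean over a hand-rolled singleton for this kind of global state:

        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;

        // Classic hand-rolled singleton: the class enforces its own single instance.
        class GlobalDataSingleton {
            private static final GlobalDataSingleton INSTANCE = new GlobalDataSingleton();
            private GlobalDataSingleton() {}
            public static GlobalDataSingleton getInstance() {
                return INSTANCE;
            }
        }

        // Spring version: an ordinary class; the container guarantees one instance
        // per ApplicationContext because "singleton" is the default bean scope.
        class GlobalData {
        }

        @Configuration
        class AppConfig {
            @Bean
            public GlobalData globalData() {
                return new GlobalData(); // created once and cached by the container
            }
        }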

    Read the article

  • Single Table Inheritance (database inheritance design options): pros, cons, and when it is used

    - by Yosef
    Hi, today I studied two database inheritance design approaches: 1. Single Table Inheritance and 2. Class Table Inheritance. In my (student's) opinion, Single Table Inheritance makes the database smaller than the other approaches because it uses only one table. But I read that, according to Bill Karwin, the more favoured approach is Class Table Inheritance. My question is: what are the pros and cons of Single Table Inheritance, and in which cases is it used? Thanks, Yosef
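
    For illustration, here is how the two approaches look in JPA, where the mapping annotations name them directly (the Payment/CardPayment classes are invented examples): SINGLE_TABLE keeps every subclass in one table with a discriminator column and nullable subclass fields, while JOINED (class table inheritance) gives each subclass its own table joined to the parent:

        import javax.persistence.DiscriminatorColumn;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Inheritance;
        import javax.persistence.InheritanceType;

        // Single Table Inheritance: one table holds Payment and every subclass;
        // a discriminator column says which row is which, and subclass columns are nullable.
        @Entity
        @Inheritance(strategy = InheritanceType.SINGLE_TABLE)
        @DiscriminatorColumn(name = "payment_type")
        class Payment {
            @Id
            Long id;
        }

        @Entity
        class CardPayment extends Payment {
            String cardNumber; // becomes a nullable column on the shared table
        }

        // Class Table Inheritance would instead use:
        //     @Inheritance(strategy = InheritanceType.JOINED)
        // giving each subclass its own table joined to the base table by primary key.

    The usual trade-off is that the single table is simpler and avoids joins, but it cannot enforce NOT NULL on subclass-specific columns, which is one reason Class Table Inheritance is often preferred.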

    Read the article

  • Simple way to implement computer-go board in Java

    - by codingbear
    I want to make a simple Go board for a computer Go game. In a Go game, you lay a "stone" (white or black) on a position where a horizontal and a vertical line intersect. What are some simple ways to restrict users from placing their stones anywhere else? Maybe I'm just not seeing a simple solution. EDIT: I guess I should rephrase my question. I want to know how to handle the background image of the Go board so that I can lay my stones on the intersections of the horizontal and vertical lines. I was thinking about taking a regular Go board image and, when actually rendering stones, finding the right pixel positions to lay them. However, that did not seem to be the best solution, since I need to worry about the size of the stone images and keep everything proportional when I expand or shrink the board window.
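
    A common alternative to hunting for pixel positions on a fixed background image is to draw the board from a margin and a cell size and to convert between pixels and board coordinates with those same two numbers, so resizing stays proportional automatically. A minimal sketch of the conversion (the board size, margin and cell size are made-up values):

        import java.awt.Point;

        public class GoBoardGeometry {
            final int boardSize = 19;   // 19x19 intersections
            final int margin = 30;      // pixels from the window edge to the first line
            final int cellSize = 32;    // pixels between adjacent lines

            /** Snaps a mouse click to the nearest intersection, or returns null if off the board. */
            Point toIntersection(int pixelX, int pixelY) {
                int col = Math.round((pixelX - margin) / (float) cellSize);
                int row = Math.round((pixelY - margin) / (float) cellSize);
                if (col < 0 || col >= boardSize || row < 0 || row >= boardSize) {
                    return null; // click was outside the playable grid
                }
                return new Point(col, row);
            }

            /** Center pixel of an intersection, used when painting a stone there. */
            Point toPixel(int col, int row) {
                return new Point(margin + col * cellSize, margin + row * cellSize);
            }
        }

    When the window is resized, recomputing cellSize from the current width keeps both the lines and the stone images proportional without any per-pixel guesswork.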

    Read the article

  • Preparing the layout of a web design

    - by RPK
    I am starting design work on my first website. I know very little HTML, I don't know CSS, and I am going to learn and use both as I go. I want to know whether there is a tutorial on how to create a layout for a website. Are there any tips or best practices to follow before the design work starts?

    Read the article

  • Silverlight Visual Studio XAML Design view not working

    - by Piyush
    I have installed Visual Studio 2008 SP1, the Silverlight Tools, the Silverlight SDK and the Silverlight Toolkit 2009, but when I open a Silverlight application the Silverlight tools still do not show up in my tool window, and the Silverlight XAML design view (color coding) is not working: the whole XAML code appears in black.

    Read the article

  • Web Safe Area (optimal resolution) for web app design

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering what 'web safe area' I should optimize the app layout and design for. I did some investigation and thinking of my own, but wanted to share this to see what the general opinion is. Here is what I found.

    Optimal display resolution sources:

        - w3schools web stats seems to be the most referenced source (however, they state that the results come from their own site and are biased towards tech-savvy users)
        - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services)
        - StatCounter Global Stats Display Resolution (stats based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected across the StatCounter network of more than 3 million websites)
        - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from the browsers of visitors to their on-demand network of live-stats customers, compiled from approximately 160 million visitors per month)

    Display resolution summary: There is a bit of variation between the above sources, but in general, as of Jan 2011, 1024x768 sits at about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of the total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess is that the higher resolution values would be even more common if we filtered on North America, and higher still if we filtered on North American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop population is likely lower than the aforementioned 20%, since the iPad (1024x768 native display) is probably propping those numbers up. My recommendation would be to optimize around a 1280x768 constraint (note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).

    Browser + OS constraints: To further tighten the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space-consuming) and the OS (assuming WinXP-Win7). Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px). The default IE8 view uses up 25px at the bottom of the screen for the status bar and a further 120px at the top for the window title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would be 91px without it). Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline. This leaves 583px of vertical space and 1276px of horizontal space; in other words, a web safe area of 1276 x 583.

    Is this a correct line of thinking? I tried to Google some design best practices, but most still talk about designing around 1024x768, which seems to be quickly disappearing. Any help on this would be greatly appreciated! Thanks.

    Read the article

  • Java - how to design your own type?

    - by Walter White
    Hi all, Is it possible to design your own Java Type, say an extensible enum? For instance, I have user roles that a certain module uses and then a sub-package provides additional roles. What would be involved on the JDK side of things? Walter
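
    Enums themselves cannot be subclassed, but the usual workaround is to have several enums implement a shared interface, so a sub-package can add its own roles without touching the base set. A small sketch of that pattern (the Role/BaseRole/ModuleRole names are invented for the example):

        // The shared abstraction: code that checks permissions depends only on this.
        interface Role {
            String permissionKey();
        }

        // Core roles shipped with the main package.
        enum BaseRole implements Role {
            ADMIN, USER;

            public String permissionKey() {
                return "base." + name().toLowerCase();
            }
        }

        // A sub-package "extends" the set by adding another enum of the same interface.
        enum ModuleRole implements Role {
            REPORT_VIEWER, REPORT_EDITOR;

            public String permissionKey() {
                return "module." + name().toLowerCase();
            }
        }

        class Demo {
            static void grant(Role role) {
                System.out.println("granting " + role.permissionKey());
            }

            public static void main(String[] args) {
                grant(BaseRole.ADMIN);
                grant(ModuleRole.REPORT_VIEWER);
            }
        }

    Nothing special is involved on the JDK side; the "type" is simply the interface, and each package supplies its own enum of implementations.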

    Read the article

  • Is it worthwhile to implement observer pattern in PHP?

    - by Extrakun
    I have been meaning to make use of design patterns in PHP, such as the observer pattern, but the fact that I have to recreate the observer relationships each time the page is loaded pains me. Since references are saved as new concrete objects in the session, there is no way to preserve the relationships between subjects and their observers unless you use a GUID or some other property to form a lookup and store that property instead. With the cost of recreating the relationships on every page load, is it still worthwhile to use design patterns such as observer in PHP for the sake of a clean design? Any real-world experience to share?

    Read the article
