Search Results

Search found 5920 results on 237 pages for 'hand drawn'.

Page 65 of 237

  • OpenGL ES multiple objects not being rendered

    - by ladiesMan217
    I am doing the following to render multiple balls moving around the screen, but only one ball appears and functions. I don't know why the remaining (count-1) balls are not being drawn:

        public void onDrawFrame(GL10 gl) {
            // TODO Auto-generated method stub
            gl.glDisable(GL10.GL_DITHER);
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glClientActiveTexture(DRAWING_CACHE_QUALITY_HIGH);
            gl.glLoadIdentity();
            for (int i = 0; i < mParticleSystem.getParticleCount(); i++) {
                gl.glPushMatrix();
                gl.glTranslatef(mParticleSystem.getPosX(i), mParticleSystem.getPosY(i), -3.0f);
                gl.glScalef(0.3f, 0.3f, 0.3f);
                gl.glColor4f(r.nextFloat(), r.nextFloat(), r.nextFloat(), 1);
                gl.glEnable(GL10.GL_TEXTURE_2D);
                mParticleSystem.getBall(i).draw(gl);
                gl.glPopMatrix();
            }
        }

    Here is my draw(GL10 gl) method:

        public void draw(GL10 gl) {
            gl.glEnable(GL10.GL_CULL_FACE);
            gl.glEnable(GL10.GL_SMOOTH);
            gl.glEnable(GL10.GL_DEPTH_TEST);
            // gl.glTranslatef(0.2f, 0.2f, -3.0f);
            // gl.glScalef(size, size, 1.0f);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertBuff);
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points / 2);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }

    Read the article

  • C++ AMP recording and slides

    - by Daniel Moth
    Yesterday we announced C++ Accelerated Massive Parallelism. Many of you want to know more about the API instead of just meta information. I will trickle more code over the coming months leading up to the date when we will share actual bits. Until you have bits in your hand, it is only your curiosity that is blocked, so I ask you to be patient with that and allow me to release this on our own schedule ;-) You can now watch my 45-minute session introducing C++ AMP on channel9. You will also want to download the slides (pdf), because they are not readable in the recording. Comments about this post welcome at the original blog.

    Read the article

  • When do you change your major/minor/patch version number?

    - by dave4351
    Do you change your major/minor/patch version numbers right before you release or right after? Example: You just released 1.0.0 to the world (huzzah!). But wait, don't celebrate too much. 1.1.0 is coming out in six weeks! So you fix a bug and do a new build. What's that build called? 1.1.0.0 or 1.0.0.xxxy (where xxxy is the build number of 1.0.0 incremented)? Keep in mind you may have 100 features and bugs to go into 1.1.0. So it might be good to call it 1.0.0.xxxy, because you're nowhere close to 1.1.0. But on the other hand, another dev may be working on 2.0.0, in which case your build might be better named 1.1.0.0 and his 2.0.0.0 instead of 1.0.0.xxxy and 1.0.0.xxxz, respectively.

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing design according to particular use cases (by the tool builder). Text editors are probably the most prominent example--a coder who works on Windows at work and codes in Haskell on the Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg)? I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counter-example--some task (that's relevant and arises in a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data--not that git is better, etc. My guess is that such counter-examples exist, hence this question.

    Read the article

  • Checking for collisions on a 3D heightmap

    - by Piku
    I have a 3D heightmap drawn using OpenGL (which isn't important). It's represented by a 2D array of height data. To draw this I go through the array using each point as a vertex. Three vertices are wound together to form a triangle, two triangles to make a quad. To stop the whole mesh being tiny I scale this by a certain amount called 'gridsize'. This produces a fairly nice and lumpy, angular terrain kind of similar to something you'd see in old Atari/Amiga or DOS '3D' games (think Virus/Zarch on the Atari ST). I'm now trying to work out how to do collision with the terrain, testing to see if the player is about to collide with a piece of scenery sticking upwards or fall into a hole. At the moment I am simply dividing the player's co-ordinates by the gridsize to find which vertex the player is on top of, and it works well when the player is exactly over the corner of a triangle piece of terrain. However... How can I make it more accurate for the bits between the vertices? I get confused since they don't exist in my heightmap data, they're a product of the GPU trying to draw a triangle between three points. I can calculate the height of the point closest to the player, but not the space between them. I.e. if the player is hovering over the centre of one of these 'quads', rather than over the corner vertex of one, how do I work out the height of the terrain below them? Later on I may want the player to slide down the slopes in the terrain.
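
    A common approach (not described in the post itself) is to locate the grid cell under the player and bilinearly interpolate the four surrounding height samples; this closely approximates the two triangles that make up each quad. A minimal Java sketch, where height[][] and gridSize are assumed stand-ins for the question's heightmap array and 'gridsize':

        // Hypothetical helper: returns the interpolated terrain height under a world position.
        // Assumes worldX/worldZ fall inside the heightmap; real code should clamp the indices.
        static float terrainHeightAt(float worldX, float worldZ, float[][] height, float gridSize) {
            float gx = worldX / gridSize;              // position in grid units
            float gz = worldZ / gridSize;
            int x0 = (int) Math.floor(gx);
            int z0 = (int) Math.floor(gz);
            float fx = gx - x0;                        // fractional offset inside the quad, 0..1
            float fz = gz - z0;
            float h00 = height[z0][x0];                // the quad's four corner heights
            float h10 = height[z0][x0 + 1];
            float h01 = height[z0 + 1][x0];
            float h11 = height[z0 + 1][x0 + 1];
            // Blend along x on both edges, then along z between them.
            float top = h00 + (h10 - h00) * fx;
            float bottom = h01 + (h11 - h01) * fx;
            return top + (bottom - top) * fz;
        }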

    Read the article

  • How to replace all images in Libreoffice with their description

    - by user30131
    I have a very long document containing lots of SVG images created using the extension TexMaths. This extension uses the LaTeX installation to create an SVG image of the inputted equation (or set of equations). The LaTeX code for each equation (or set of equations) is embedded in the image as part of its Description. Such a Description can be accessed by right-clicking the SVG image and choosing the option Description. I want to use a suitable macro to replace all the SVG images with their embedded descriptions, e.g. from "Einstein's famous equation, [svg embedded equation: E = mc^2], tells us that mass can be converted to energy and vice-versa." to "Einstein's famous equation, E = mc^2, tells us that mass can be converted to energy and vice-versa." This will allow me to convert the odt file containing numerous TexMaths equations to LaTeX by hand.

    Read the article

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function). I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions. Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights are going to be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define in a shader some number of lights to be used, right? So if I want to stick with 8, should I write my general-purpose shader to have 8 lights and then use uniforms to tell it how many / which lights to use? Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors." If I've got a fragment shader being used on some number of fragments in a given triangle, I assume they must each have their own stack to work on. Are read-only variables copied here, or read when needed? My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix and apply all my translations, rotations, and scales to this matrix, then provide the matrix to the vertex shader so that it can make very quick vertex translations. Is this approach sound? Edit: My original intention was to ask if there was a tutorial that would explain the bare minimum necessary to jump from fixed-function to using shaders. Thanks!
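
    One way to realise the "fixed light count plus a uniform" idea together with a hand-rolled matrix stack on Android is sketched below in Java; the uniform names (uModelView, uNumLights) and the class shape are assumptions for illustration, not taken from the post. On the GLSL side the shader would declare a fixed-size array of light uniforms and only loop up to uNumLights.

        import android.opengl.GLES20;
        import android.opengl.Matrix;
        import java.util.ArrayDeque;
        import java.util.Deque;

        // Hypothetical helper: keeps a CPU-side model-view stack and uploads the top
        // of the stack plus an active-light count to the shader before each draw call.
        class MatrixStack {
            private final Deque<float[]> stack = new ArrayDeque<>();
            private float[] current = new float[16];

            MatrixStack() { Matrix.setIdentityM(current, 0); }

            void push() { stack.push(current.clone()); }
            void pop()  { current = stack.pop(); }

            void translate(float x, float y, float z) { Matrix.translateM(current, 0, x, y, z); }
            void scale(float x, float y, float z)     { Matrix.scaleM(current, 0, x, y, z); }
            void rotate(float deg, float x, float y, float z) { Matrix.rotateM(current, 0, deg, x, y, z); }

            // Upload the current matrix and tell the shader how many of its fixed light slots to use.
            void apply(int program, int numActiveLights) {
                int uMv = GLES20.glGetUniformLocation(program, "uModelView");   // assumed uniform name
                int uN  = GLES20.glGetUniformLocation(program, "uNumLights");   // assumed uniform name
                GLES20.glUniformMatrix4fv(uMv, 1, false, current, 0);
                GLES20.glUniform1i(uN, numActiveLights);
            }
        }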

    Read the article

  • New Information Center - Optimize Performance of FMW 11g

    - by Daniel Mortimer
    Following on the heels of the recently published "Reviewing Security for FMW 11g" Information Center, we are pleased to announce the publication of Information Center: Optimizing Performance of Oracle Fusion Middleware 11g [ID 1469617.2]. We are in the process of making further tweaks and changes to improve the other ** "Oracle Fusion Middleware 11g" Information Centers. So watch this space! ** You can navigate to these other Information Centers via the menu found on the left-hand side of the "Optimize Performance" Information Center.

    Read the article

  • OpenGL doesn't draw (3.3+) [on hold]

    - by Dhiego Magalhães
    Brief: I've been following this tutorial about OpenGL for 2 days, and I still can't get a triangle drawn, so I'm asking for help here. The tutorial targets OpenGL 3.3 programming, using vertex arrays, buffers, etc. The libraries are GLFW3 and GLEW, and I set them up myself. The screen stays black all the time. Full code: link here (it's just like a Hello World OpenGL program). Further details: I get no errors at all. I downloaded a tool to test my video card, and it supports OpenGL 4.1+. Standard OpenGL drawing code (from earlier versions) such as this one works normally. I'm using Microsoft Visual Studio 10.0. I presume the OpenGL setup was done right: I added Additional Dependencies to the linker as glew32.lib, opengl32.lib, glfw3.lib. The glew.dll was placed in SysWOW64 - because I'm running 64-bit Windows, and GLEW is 32-bit. Notes: I've been working hard to find out what this is, but I can't. I would appreciate it if anyone could test this code for me, so I can know if I implemented something wrong, and that it's not my code.

    Read the article

  • How to make image bigger than the screen to be slideable in the screen in monogame for windows phone 8?

    - by Moses Aprico
    (I don't know if my title is correct, because when I google it I find no related results.) I am not sure how to explain it correctly, but I am making a plain 2D, tile-based tactics game for Windows Phone 8 using MonoGame. I want to make my map "slideable". By "slideable" I mean I can draw images that are larger (in total) than my screen and then slide the view so I can look at a certain area of the drawn images. Example: I have a screen whose dimensions are 1280x720. I have a 1500x1500px image, which consists of 15 tiles, each 100x100px, where each tile is redrawn each time "Draw" is called. If the image is larger than the screen, the displayed area is trimmed, leaving a 220x780px area that cannot be seen. The only way to see all of it is by "sliding" the screen around. My question is: how do I make that happen? By default the screen is not slideable and the image remains trimmed. Sorry if my question and explanation are not clear enough. Clarify it as much as you like. Thank you.
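
    A common pattern (a framework-agnostic sketch, not MonoGame-specific API) is to keep a camera offset, clamp it to the map bounds, and subtract it from every draw position. In MonoGame the same offset is typically turned into a translation matrix (e.g. Matrix.CreateTranslation(-x, -y, 0)) and passed to SpriteBatch.Begin, so the tiles themselves never change.

        // Minimal sketch of a clamped, draggable 2D camera; names and sizes are assumptions
        // based on the 1280x720 screen and 1500x1500 map from the question.
        class Camera2D {
            float x, y;                               // top-left of the visible window, in map pixels
            final int screenW = 1280, screenH = 720;
            final int mapW = 1500, mapH = 1500;

            // Called from touch/drag input with the finger's movement delta.
            void slide(float dx, float dy) {
                x = clamp(x - dx, 0, mapW - screenW); // dragging right moves the view left
                y = clamp(y - dy, 0, mapH - screenH);
            }

            // Convert a map-space position to a screen-space draw position.
            float toScreenX(float mapX) { return mapX - x; }
            float toScreenY(float mapY) { return mapY - y; }

            private static float clamp(float v, float min, float max) {
                return Math.max(min, Math.min(max, v));
            }
        }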

    Read the article

  • Is it possible that Unity would some day switch back to Mutter?

    - by David
    I remembered that the first Unity was indeed built on Mutter, but later ported to Compiz due to poor performance. I also know Canonical practically incorporated Compiz to work closely on future Unity, so this is getting less likely. But Compiz just seems pretty outdated now that GNOME3/GTK3/Mutter is becoming more mainstream, and it is known to have some performance issues; on the other hand, Mutter seems pretty good and is still steadily developing, so I'm just wondering if anyone related to the project is still testing and evaluating the possibility of Unity on Mutter. Not that you have to tell me now if you're going to do it or not. I just wanna know if anyone is considering it. Thanks.

    Read the article

  • From the Coalface - 4 - Getting a connection string

    - by TATWORTH
    Creating a connection string by hand is quite difficult; however, you can create one as follows:
    1) Create an empty text file in Windows Explorer and rename it to X.UDL.
    2) Double-click on it and the Data Link provider dialog will appear.
    3) Select the Provider tab, find the provider for your data access method and click Next.
    4) Select your source.
    5) Test the connection and save it.
    6) Open X.UDL with a text editor to see your connection string.
    You can also look at http://www.connectionstrings.com/ for examples of connection strings.
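
    For illustration only (the provider and values below are assumptions, not from the post), the saved X.UDL is a small text file whose last line is the connection string itself, e.g.:

        [oledb]
        ; Everything after this line is an OLE DB initstring
        Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=Northwind;Data Source=.\SQLEXPRESS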

    Read the article

  • NEW EMEA Hardware Partner Community

    - by Cinzia Mascanzoni
    We are delighted to announce the availability of the EMEA HW partner community. The EMEA Partner Community for Hardware is the place where partners in Europe, Middle East and Africa can share experiences and best practices about selling and implementing Servers, Storage and Solaris-based projects. You will also receive first-hand information from Oracle on products, training and tools that can help you better market, sell and implement your projects and services based on Oracle Hardware. If you are an individual working for an Oracle partner or distributor and your job is selling, implementing or supporting Oracle Servers, Storage and Solaris projects in EMEA, then this community is for you. For further information on the EMEA HW partner community and instructions on how to become a member, please visit: www.oracle.com/partners/goto/hardware-emea

    Read the article

  • Positive reinforcements @ work [closed]

    - by nurne
    I found out that what fuels me to do well at work are positive reinforcements from bosses, colleagues, and customers. My current job at a startup is very demanding. My boss doesn't have time to give positive reinforcements, and I'm also always behind schedule, so maybe I don't deserve them. On the other hand, I don't get any negative reinforcements, so I guess that as long as this doesn't happen, what I'm doing is OK. How is your relationship with bosses, colleagues and customers at work? Do you need positive reinforcements? Do you get them? How do you make them happen? Is there some kind of standard for developers? For hi-tech? Thanks

    Read the article

  • pulseaudio: no microphone configuration

    - by mitsch
    Updated Ubuntu to precise from oneiric on a Dell Inspiron Mini 10 (with an Intel HDA sound card). I can't remember having any issues with the microphone; I didn't need it - I tried Ekiga for the first time in precise. I couldn't hear any sound in Ekiga's echo test, so I switched to "System Preferences" and looked for the microphone to boost it. Surprisingly, the microphone input was greyed out - I couldn't mute or unmute, and I couldn't even move the volume slider. On the other hand, I could change the microphone setting on the console with alsamixer, so don't worry about that… :) I just wanted to ask how to get pulseaudio back to the known, comfortable behaviour. Some newbies won't know the trick of using alsamixer… My sound card (output of lspci):
        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 02)
    Greets!

    Read the article

  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly Cloud. Over the five action-packed days, I got to meet a large number of customers and most of them had serious interest in all things cloud. Public Cloud - particularly the Oracle Cloud - clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations. Well, I shouldn't really call them revelations since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions. While the interest in enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call a true private cloud - with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization - and they are quite far off from creating an agile, self-service and transparent IT infrastructure, which is what the enterprise cloud is all about. Even a handful of those who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service isn't something they see an immediate need for. This is in quite a contrast to the picture being painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud. On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality? Well, the reality is that there is undoubtedly a huge amount of interest among enterprises in transforming their legacy IT environment - which is often seen as too rigid, too fragmented, and ultimately too expensive - into something more agile, transparent and business-focused. At the same time, however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter on the blogosphere, and it will get even a sane mind spinning. Consequently, most enterprises are still struggling to fully understand the concept and value of enterprise private cloud. Even among those who have chosen to move forward relatively early, quite a few have made their decisions based more on vendor influence/preferences than on what their businesses actually need. Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends. So what is the way forward? I certainly do not claim to have all the answers. But here is a perspective that many cloud practitioners have found useful and thus worth sharing. To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven more by business needs than constrained by operational and procedural inefficiencies.

    It is the new way of delivering and consuming IT services - where the IT organizations operate more like enablers of strategic services rather than just being the gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower the end users via self-service access/control and provide the business stakeholders a transparent view of how the resources are being used, what the cost of delivering a given service is, how well the customers are being served, etc. But the most important thing to note here is that the enterprise private cloud is not just an IT project; rather, it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn't be. Just remember how business users have been at the forefront of public cloud adoption within enterprises, and private cloud is no exception. Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand in hand with process re-engineering and organizational changes to unlock the true value of the enterprise cloud. I am sharing a short video from my session "Managing your private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on the Oracle enterprise private cloud solution will join me on the "Zero to Cloud" blog and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback and suggestions, and to having an engaging conversation with you on this blog.

    Read the article

  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post processing and order independent transparency.

        rtScene = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight,
            false,
            SurfaceFormat.Rgba64,
            DepthFormat.Depth24Stencil8, // Requires a depth format for objects to be drawn correctly (e.g. wireframe model surrounding model)
            0,
            RenderTargetUsage.PreserveContents
        );

    I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below: DrawBackground, DrawDeferred, DrawForward, DrawTransparent. The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents as this causes problems on hardware such as the XBOX 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "Ping-Ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?

    Read the article

  • GLES2.0 3D Android game performance and multi threading the update?

    - by Ofer
    I have profiled my mixed Java\C++ Android game and I got the following result: https://dl.dropbox.com/u/8025882/PompiDev/AndroidProfile.png As you can see, the pink chunk is a C++ function that updates the game. It does things like updating the logic, but mostly it generates a "request list" for rendering. The thing is, I generate DrawLists in C++ and then send them to Java to process and draw using GLES 2.0. Since then I was able to improve the update from 9ms down to about 7ms, but I would like to ask whether I would benefit from multithreading the update. As I understand from that diagram, the function that takes the most time is the one whose color is shown on the timeline. So the pink area is taken mostly by update. The other area has MainOpenGL.Handle as its main contributor (which is my Java function), but since it's not drawn to the top of the diagram, can I conclude that other things are happening at the same time that use the CPU? Or even GPU work that isn't shown in this diagram. I am not sure how the GPU works here. Does it calculate stuff in parallel to the CPU? Or is it part of the CPU usage, as in a SoC? I am not sure. Anyway, in case GPU things DO happen in parallel to the CPU, then I would guess that if I run this C++ update in parallel to the thread that makes the OpenGL calls, I might make use of "dead CPU time" due to GPU stalling, or maybe have the GPU calls processed earlier because they won't have to wait for update to finish. How do you suggest I improve performance based on that? Thanks.
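
    If the update is moved to its own thread, one common arrangement (a generic hand-off sketch, not tied to this particular engine) is to pass over complete draw lists so the GL thread never waits on a half-built frame:

        import java.util.List;

        // Hypothetical hand-off point between the update thread and the GL thread.
        // The update thread builds a full "request list" and publishes it; the GL
        // thread grabs the newest list at the start of onDrawFrame and re-uses the
        // previous one if nothing new has arrived yet.
        class DrawListExchange<T> {
            private List<T> pending;              // latest finished frame, not yet consumed

            public synchronized void publish(List<T> frameCommands) {
                pending = frameCommands;          // update thread: hand over a finished frame
            }

            public synchronized List<T> takeLatest(List<T> previousFrame) {
                if (pending == null) {
                    return previousFrame;         // GL thread: nothing new, redraw the last frame
                }
                List<T> latest = pending;
                pending = null;
                return latest;
            }
        }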

    Read the article

  • Collision detection between a sprite and rectangle in canvas

    - by Andy
    I'm building a JavaScript + canvas game which is essentially a platformer. I have the player all set up and he's running, jumping and falling, but I'm having trouble with the collision detection between the player and blocks (the blocks will essentially be the platforms that the player moves on). The blocks are stored in an array like this:

        var blockList = [[50, 400, 100, 100]];

    And drawn to the canvas using this:

        this.draw = function() {
            c.fillRect(blockList[0][0], blockList[0][1], 100, 100);
        }

    I'm checking for collisions using something along these lines in the player object:

        this.update = function() {
            // Check for collisions with blocks
            for (var i = 0; i < blockList.length; i++) {
                if ((player.xpos + 34) > blockList[i][0] && player.ypos > blockList[i][1]) {
                    player.xpos = blockList[i][0] - 28;
                    return false;
                }
            }
            // Other code to move the player based on keyboard input etc
        }

    The idea is that if the player will collide with a block in the next game update (the game uses a main loop running at 60 Hz), the function will return false and exit, meaning the player won't move. Unfortunately, that only works when the player hits the left side of the block, and I can't work out how to make the player stop if it hits any side of the block. I have the properties player.xpos and player.ypos to help here.
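
    The check above only looks at the block's left and top edges; the usual fix is a full axis-aligned bounding box (AABB) test against all four edges. A small sketch of the idea in Java (the [x, y, w, h] layout mirrors the question's blockList; the player width/height values are assumptions):

        // True when two axis-aligned rectangles overlap on both axes.
        // (px, py) is the player's top-left corner, pw/ph its width and height;
        // block follows the apparent [x, y, w, h] layout of blockList.
        static boolean intersects(float px, float py, float pw, float ph, float[] block) {
            float bx = block[0], by = block[1], bw = block[2], bh = block[3];
            return px < bx + bw        // player's left edge is left of the block's right edge
                && px + pw > bx        // player's right edge is right of the block's left edge
                && py < by + bh        // player's top edge is above the block's bottom edge
                && py + ph > by;       // player's bottom edge is below the block's top edge
        }

    Which side was hit (and therefore how to push the player back out) can then be decided by comparing how deeply the two rectangles overlap on each axis.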

    Read the article

  • Bunny Inc. Season 2: Find Specialist Partner Resources for Success

    - by kellsey.ruppel
    You may need an additional hand to improve your IT infrastructure, or advice to evolve existing enterprise applications. Or perhaps you’re seeking revolutionary ideas to refresh online presence. Whatever the case, spotting the right partners’ ecosystem will be a central step to grow your business. Don't be a Hare Inc. company by wasting valuable time sourcing relevant expertise, competencies and proven successes on Oracle's product portfolio on your own. Follow Bunny Inc. in the fourth episode of the saga and discover what our worldwide partner community can do for you thanks to the new Oracle Partner Network Specialized program. 

    Read the article

  • When it's more productive to build your own framework than using an existing one?

    - by Pierre 303
    I would like to know why you decided to build your own framework in your company. By framework, I don't mean a few libraries you use often; I mean a specific way of building applications on top of it, with base classes, conventions, etc. So why did you build your own framework? How could you justify that to the person who employs you? Have you measured its positive and negative impact? Regarding your experiences, did you notice that in some cases a company framework produced real benefits, or, on the other hand, increased development costs (learning curve, debugging, maintenance, ...)?

    Read the article

  • How are design-by-contract and property-based testing (QuickCheck) related?

    - by Todd Owen
    Is their only similarity the fact that they are not xUnit (or more precisely, not based on enumerating specific test cases), or is it deeper than that? Property-based testing (using QuickCheck, ScalaCheck, etc.) seems well-suited to a functional programming style where side-effects are avoided. On the other hand, Design by Contract (as implemented in Eiffel) is more suited to OOP languages: you can express post-conditions about the effects of methods, not just their return values. But both of them involve testing assertions that are true in general (rather than assertions that should be true for a specific test case). And both can be tested using randomly generated inputs (with QuickCheck this is the only way, whereas with Eiffel I believe it is an optional feature of the AutoTest tool). Is there an umbrella term to encompass both approaches? Or am I imagining a relationship that doesn't really exist?
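
    As a concrete illustration of the property-based half (a hand-rolled sketch in Java, deliberately not using any actual QuickCheck port), the test states a property that should hold in general and checks it against many randomly generated inputs instead of enumerating cases:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.Random;

        // Property: reversing a list twice yields the original list.
        // Real tools (QuickCheck, ScalaCheck) also shrink failing inputs; this sketch only generates and checks.
        public class ReverseProperty {
            public static void main(String[] args) {
                Random rng = new Random(42);
                for (int trial = 0; trial < 100; trial++) {
                    List<Integer> xs = new ArrayList<>();
                    int n = rng.nextInt(20);
                    for (int i = 0; i < n; i++) xs.add(rng.nextInt());

                    List<Integer> twiceReversed = new ArrayList<>(xs);
                    Collections.reverse(twiceReversed);
                    Collections.reverse(twiceReversed);

                    if (!twiceReversed.equals(xs)) {
                        throw new AssertionError("Property failed for input: " + xs);
                    }
                }
                System.out.println("Property held for 100 random inputs.");
            }
        }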

    Read the article

  • Sorting for 2D Drawing

    - by Nexian
    okie, looked through quite a few similar questions but still feel the need to ask mine specifically (I know, crazy). Anyhoo: I am drawing a game in 2D (isometric). My objects have their own arrays (i.e. Tiles[], Objects[], Particles[], etc). I want to have a draw[] array to hold anything that will be drawn. Because it is 2D, I assume I must prioritise depth over any other sorting or things will look weird. My game is turn based so Tiles and Objects won't be changing position every frame. However, Particles probably will. So I am thinking I can populate the draw[] array (probably a vector?) with what is on-screen and have it add/remove object, tile & particle references when I pan the screen or when a tile or object is specifically moved. No idea how often I'm going to have to update for particles right now. I want to do this because my game may have many thousands of objects and I want to iterate through as few as possible when drawing. I plan to give each element a depth value to sort by. So, my questions: Does the above method sound like a good way to deal with the actual drawing? What is the most efficient way to sort a vector? Most of the time it won't require efficiency. But for panning the screen it will. And I imagine if I have many particles on screen moving across multiple tiles, it may happen quite often. For reference, my screen will be drawing about 2,800 objects at any one time. When panning, it will be adding/removing about ~200 elements every second, and each new element will need adding in the correct location based on depth.
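
    For the sorting question, here is a Java sketch of the two usual options (class and field names are assumptions): a comparator sort for bulk changes such as panning, and a binary-search insert for dropping a handful of new elements into an already-sorted draw list.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        // Hypothetical drawable with a depth key; lower depth draws first.
        class Drawable {
            float depth;
            Drawable(float depth) { this.depth = depth; }
        }

        class DrawList {
            static final Comparator<Drawable> BY_DEPTH =
                    Comparator.comparingDouble((Drawable d) -> d.depth);
            final List<Drawable> items = new ArrayList<>();

            // Bulk case (e.g. panning added/removed many elements): one O(n log n) sort.
            void sortAll() {
                items.sort(BY_DEPTH);
            }

            // Incremental case: insert one element into an already-sorted list,
            // O(log n) search plus an O(n) shift.
            void insertSorted(Drawable d) {
                int idx = Collections.binarySearch(items, d, BY_DEPTH);
                if (idx < 0) idx = -idx - 1;   // binarySearch returns (-(insertion point) - 1) when not found
                items.add(idx, d);
            }
        }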

    Read the article

  • Largest successful JavaScript project? [closed]

    - by 80x24 console
    A common theme in the GWT community is "I wouldn't want to build a project of THAT size using a pure JavaScript library!" What is the largest project that you have successfully delivered with frontend functionality written in JavaScript? (not Java or GWT) Please provide at least a hand-wavy SLOC estimate of the unique JS code (not including libraries, frameworks, toolkits, test code, generated code, server-side processing such as PHP, etc.) that was in the finished product. Note to GWT advocates: Please read the question carefully before answering. I've heard plenty of stories about JS failures and GWT successes, but I'd like to hear some quantified JS successes. Note to mods: This is primarily a business-of-software question, not a tools question. It factors into a real-world business decision.

    Read the article

  • Updating and organizing class diagrams in a growing C++ project

    - by vanna
    I am working on a C++ project that is getting bigger and bigger. I do a lot of UML, so it is not really hard to explain my work to co-workers. Lately, though, I implemented a lot of new features and gave up updating my Dia UML diagrams by hand. I once tried the class diagram feature of Visual Studio, which is my IDE, but didn't get clear results. I need to show my work on a regular basis and I would like to be as clear as possible. Is there any tool that could generate a sort of organized map of my work (namespaces, classes, interactions, etc.)?

    Read the article
