Search Results

Search found 5564 results on 223 pages for 'multi gpu'.

  • What are the pros/cons of Unity3D as a choice for making games?

    - by jokoon
    We are doing our school project with Unity3D, since the school used Shiva the previous year (which seems horrible to me), and I wanted to know your point of view on this tool.
    Pros:
    - multi-platform; I even heard Google is going to implement it in Chrome
    - everything you need is included
    - scripting languages make it a good choice for people who are not programming gurus
    Cons:
    - multiplayer?
    - proprietary: you are totally dependent on Unity and its limits and can't extend it
    - it's less "making a game from scratch"; C++ would have been a cool thing
    I really think this kind of tool is interesting, but is it worth using at school for a project that involves more than 3 programmers? What do we really learn in terms of programming from using this kind of tool (I'm OK with Python and JS, but I hate C#)? We could have used Ogre instead, even though we were only starting to learn DirectX in January...

  • Oracle WebCenter Quiz

    - by Michael Snow
    Quiz: How many of the following business necessities can you accomplish with Oracle WebCenter?
    a) Employee On-boarding
    b) Policies & Procedures
    c) Regulatory Compliance
    d) Sales Enablement Dashboards
    e) Secure Deal Collaboration
    f) Document & IP Management
    g) Accounts Payable
    h) Records Management
    i) Claims Processing
    j) Marketing and Brand Management
    k) Call Center & HelpDesk
    l) Contract Management
    m) Collaborative Content Contribution and Sharing Environment
    n) Enterprise Application, Desktop and Office Integration
    o) Share Content Across Intranets And Extranets
    p) Combine Content In Composite Applications
    q) Subject Matter Expert Location
    r) Personalized Recommendations of Spaces, Documents, Wikis, Blogs, and Topics
    s) Collaborative Community Websites
    t) Marketing-Driven Websites
    u) Strategic Web Experience Management
    v) Online Engagement Optimization
    w) Create Targeted Online Experiences
    x) Manage Interactive Social Experiences
    y) Optimize Multi-Channel Customer Experiences
    z) End-User Personalization & Syndication
    aa) ALL OF THE ABOVE!!! (HINT: CHOOSE THIS ONE!!)
    bb) NONE OF THE ABOVE
    Learn More - Join us for a Webcast: Do More with Oracle WebCenter – Expand Beyond Content Management

  • Great User Group if you're based near Gloucester + Links from Entity Framework 4.0 session

    - by Eric Nelson
    I had a really fun evening doing "my final" EF 4.0 session last night (26th May 2010) at GL.NET, based out of Gloucester (although secretly I turned it into an IronRuby and Windows Azure session). They are a great crowd and Jimmy makes for a fantastic host + it is a very nice venue (Symantec offices in Gloucester, lots of parking, good room etc.) + free pizza + free SWAG + a trip to the pub afterwards (the topics were very varied!). What more could you ask for? The next session is June 16th, will be on multi-tenanted ASP.NET MVC, and comes highly recommended. Links from my session:
    - Entity Framework 4 Resources http://bit.ly/ef4resources
    - Entity Framework Team Blog http://blogs.msdn.com/adonet
    - Entity Framework Design Blog http://blogs.msdn.com/efdesign/
    - The must-have LINQPad http://www.linqpad.net
    - Entity Framework Profiler http://efprof.com/
    - IronRuby info on my blog http://geekswithblogs.net/iupdateable/category/10076.aspx

  • APIs that deal with logins

    - by Brandon Still
    I have been asked to make a mobile app for a friend's website. The website is a multi-level marketing site that sells products and franchises. A client logs in to the website and can view his or her dashboard (team members, business volume, commissions, invoices, etc.). The app is supposed to bring the dashboard to users' mobile devices (with some added features). The company does not have any APIs that deal with interaction or authentication, and I am new to the whole secure-login side of app development. My question is this: how do I let users gain access to their information via my app from the secure website when there is no API?

  • Announcing Solaris Technical Track at NLUUG Spring Conference on Operating Systems

    - by user9135656
    The Netherlands Unix Users Group (NLUUG) is hosting a full-day technical Solaris track during its spring 2012 conference. The official announcement page, including registration information, can be found at the conference page.
    This year, the NLUUG spring conference focuses on the base of every computing platform: the Operating System. Hot topics like Cloud Computing and Virtualization; the massive adoption of mobile devices, which have special needs in the OS they run yet at the same time put the challenge of massive scalability onto the internet; the rise of multi-core and multi-threaded chips... all these developments mean the Operating System is still a very interesting area where all kinds of innovations have taken and are taking place.
    The conference will focus specifically on Linux, BSD Unix, AIX, Windows and Solaris. The keynote speech will be delivered by Jon 'maddog' Hall, famous promoter and supporter of UNIX-based Operating Systems. He will talk the audience through several decades of Operating System development and share many stories untold so far. To make the conference even more interesting, a variety of talks is offered in 5 parallel tracks, covering new developments in, and collaboration between, Linux, the BSDs, AIX, Solaris and Windows.
    The full-day Solaris technical track covers the innovations delivered in Oracle Solaris 11. Deeply technically-skilled presenters will talk on a variety of topics. Each topic is first introduced at a basic level, so visitors can attend the presentations individually; attending the full day will give the audience a comprehensive overview as well as a more in-depth understanding of the most important new features in Solaris 11.
    NLUUG Spring Conference details:
    - Date: April 11, 2012
    - Time: start 09:15 (doors open 08:30), end 17:00 (drinks and snacks served afterwards)
    - Venue: Nieuwegein Business Center, Blokhoeve 1, 3438 LC Nieuwegein, The Netherlands
      Tel: +31 (0)30 - 602 69 00, Fax: +31 (0)30 - 602 69 01, Email: [email protected]
    - Conference abstracts and speaker info can be found here.
    Agenda for the Solaris track (talks are 45 minutes each and in English unless marked 'NL'):
    1. Insights into Solaris 11 - Joerg Moellenkamp, Solaris Technical Specialist, Oracle Germany
    2. Lifecycle management with Oracle Solaris 11 - Detlef Drewanz, Solaris Technical Specialist, Oracle Germany
    3. Solaris 11 Networking - Crossbow Project - Andrew Gabriel, Solaris Technical Specialist, Oracle UK
    4. ZFS: Data Integrity and Security - Darren Moffat, Senior Principal Engineer, Solaris Engineering, Oracle UK
    5. Solaris 11 Zones and Immutable Zones (NL) - Casper Dik, Senior Staff Engineer, Software Platforms, Oracle NL
    6. Experiencing Solaris 11 (NL) - Patrick Ale, UNIX Technical Specialist, UPC Broadband, NL
    There will be a "Solaris Meeting point" during the conference where people can meet up, chat with the speakers and with fellow Solaris enthusiasts, and where live demos and other hands-on experiences can be shared.
    The official announcement page, including registration information, can be found at the conference page on the NLUUG website. This site also has a complete list of abstracts for all talks. Please register on the NLUUG website.

  • How does the AlphaBlend BlendState work in XNA 4 when accumulating light into a RenderTarget?

    - by cubrman
    I am using the Deferred Rendering engine from Catalin Zima's tutorial. His lighting shader returns the color of the light in the RGB channels and the specular component in the alpha channel. Here is how light gets accumulated:
    Game.GraphicsDevice.SetRenderTarget(LightRT);
    Game.GraphicsDevice.Clear(Color.Transparent);
    Game.GraphicsDevice.BlendState = BlendState.AlphaBlend;
    // Continuously draw 3D spheres with the lighting pixel shader.
    ...
    Game.GraphicsDevice.BlendState = BlendState.Opaque;
    MSDN states that the AlphaBlend field of the BlendState class uses the following formula for alpha blending: (source × Blend.SourceAlpha) + (destination × Blend.InvSourceAlpha), where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel in the render target. My question is: why are my colors accumulated correctly in the light render target even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader:
    float specularLight = 0;
    float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
    if (light4.a == 0) light4 = 0;
    return light4;
    This prevents lighting from being accumulated and, subsequently, drawn on the screen. But when I do the following:
    float specularLight = 0;
    float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
    return light4;
    the light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above, (source × 0) + (destination × 1) should equal destination, so the "LightRT" render target must not change when I draw light spheres into it! It feels like the GPU is using Additive blending instead: (source × Blend.One) + (destination × Blend.One)
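
    For what it's worth, a likely explanation worth checking against the docs: in XNA 4 the built-in BlendState.AlphaBlend is a premultiplied-alpha state (ColorSourceBlend = Blend.One), so the actual result is source + destination × (1 − source.a), which degenerates to pure addition when source alpha is zero. A minimal sketch of a state matching the straight-alpha formula quoted above (this is what the built-in BlendState.NonPremultiplied contains):

      // straight (non-premultiplied) alpha:
      // (source × SourceAlpha) + (destination × InvSourceAlpha)
      BlendState straightAlpha = new BlendState
      {
          ColorSourceBlend = Blend.SourceAlpha,
          ColorDestinationBlend = Blend.InverseSourceAlpha,
          AlphaSourceBlend = Blend.SourceAlpha,
          AlphaDestinationBlend = Blend.InverseSourceAlpha
      };
      Game.GraphicsDevice.BlendState = straightAlpha;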

  • Raspberry Pi and Java SE: A Platform for the Masses

    - by Jim Connors
    One of the more exciting developments in the embedded systems world has been the announcement and availability of the Raspberry Pi, a very capable computer that is no bigger than a credit card. At $35 US, initial demand for the device was so significant that very long back orders quickly ensued. After months of patiently waiting, mine finally arrived. Those initial growing pains appear to have been fixed, so availability now should be much more reasonable. At a very high level, here are some of the important specs:
    - Broadcom BCM2835 System on a Chip (SoC)
    - ARM1176JZF-S, with floating point, running at 700MHz
    - VideoCore 4 GPU capable of Blu-ray quality playback
    - 256MB RAM
    - 2 USB ports and Ethernet
    - Boots from SD card
    - Linux distributions (e.g. Debian) available
    So what's taking place with respect to the Java platform and the Raspberry Pi? A Java SE Embedded binary suitable for the Raspberry Pi is available for download (ARM v6/7) here. Note, this is based on the armel architecture, a variety of ARM designed to support floating point through a compatibility library, which operates on more platforms but can hamper performance. In order to use this Java SE binary, select the available Debian distribution for your Raspberry Pi. The more recent Raspbian distribution is based on the armhf (hard float) architecture, which provides for more efficient hardware-based floating point operations. However, armhf is not binary compatible with armel. As of the writing of this blog, Java SE Embedded binaries are not yet publicly available for the armhf-based Raspbian distro, but as mentioned in Henrik Stahl's blog, an armhf release is in the works. As demonstrated at the just-completed JavaOne 2012 San Francisco event, the graphics processing unit inside the Raspberry Pi is very capable indeed, and makes for an excellent candidate for JavaFX. As such, plans also call for a Pi-optimized version of JavaFX in a future release. A thriving community around the Raspberry Pi has developed at light speed, and as evidenced by the packed attendance at Pi-specific sessions at JavaOne 2012, the interest in Java for this platform is following suit. So stay tuned for more developments...

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I woke up with a clear vision, but sadly my laptop card does neither displacement mapping nor glVertexAttribDivisor, so I can't test it out; I'm left sharing here. With geomipmapping, the grid at any factor is transposable - if you pass in an offset, say as a uniform, you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom? So I have some questions (a sketch of the single-call idea follows the list):
    - Does it work, in theory and in practice? Does anyone do it? Does this technique have a name? Papers, demos, anything I can look at?
    - Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains?
    - Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?)
    - Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang?
    - Is mipmapping the displacement map going to work? On future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?
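
    Since the single-call question is concrete enough to sketch, here is a hedged illustration (GL 3.3-style calls; offsetLoc, tileOffsets, indexCount and numTiles are illustrative names, not from the post) of drawing every tile of the shared grid in one instanced call:

      GLuint offsetVBO;
      glGenBuffers(1, &offsetVBO);
      glBindBuffer(GL_ARRAY_BUFFER, offsetVBO);
      glBufferData(GL_ARRAY_BUFFER, numTiles * 4 * sizeof(float), tileOffsets, GL_STATIC_DRAW);
      glVertexAttribPointer(offsetLoc, 4, GL_FLOAT, GL_FALSE, 0, 0);
      glEnableVertexAttribArray(offsetLoc);
      glVertexAttribDivisor(offsetLoc, 1);   /* advance once per tile, not per vertex */
      glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0, numTiles);

    In the vertex shader the per-instance offset would select both the tile's world position and the texel region of the heightmap to sample for displacement, so no per-tile uniform updates are needed.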

  • nVidia Settings: Overriding anti-aliasing causes delay

    - by Kalle Elmér
    I'm using Google SketchUp on Ubuntu 12.04 with Wine 1.4. It works flawlessly out of the box, but anti-aliasing is causing some problems. I can override the anti-aliasing settings using the nVidia X Server Settings utility, which results in a great-looking image. However, the view doesn't seem to update properly. It's a bit hard to explain, but if I do something (e.g. zooming) the changes won't appear in the view until I take another action. In other words, there seems to be a delay of one "action". Take this example:
    1. The mouse wheel is moved one notch to zoom in one step. Nothing happens.
    2. An object is selected by clicking. The new zoom is rendered, but the selection box doesn't appear.
    3. An empty area is clicked. The selection box appears.
    Is there something that I can do to solve the problem? Could I force the GPU to redraw the view at a certain interval, or is there some other solution? I really like anti-aliasing, but it's hard to use when drawing stuff.

  • Dev Lead Job opening on my team

    My product unit (Parallel Developer Tools) is hiring a developer lead here in Redmond. This position is specifically on the debugger feature team that I "Program Manage". So, if you have what it takes and don't mind working with me every single day, click on the link below to read more and apply. You can also send me your resume and I'll make sure it gets to the right place and that you get a prompt response. There is a very long job description on the Microsoft careers site under job id 707388. Here is an excerpt from the middle (emphasis mine): "...We are in search of a talented and innovative senior lead software design engineer to own development of the debugging tools for data parallelism (including GP-GPU) and HPC Clusters being built by our team. To be successful, you need to be able to guide careers, design and architect well, communicate and share the best development practices, collaborate with your peers, contribute to the vision, and code significant portions of the solution. We want to hear from you if you're passionate about making your mark in the parallel development space, improving people, and building world-class tools. Responsibilities include: managing a team of senior and junior developers; designing and coding high-quality software..." For the full background story, requirements, qualifications and responsibilities, please visit the official page. Comments about this post are welcome at the original blog.

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed Kubuntu 11.10 32-bit (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x, with a Core i7-2670QM CPU. Cores are supposed to be clocked at 2.2 GHz, however the output of
    $ cat /proc/cpuinfo | grep -i "hz"
    gives me the following pair, repeated for each of the 8 logical cores:
    model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
    cpu MHz    : 800.000
    If useful, the AC adapter is plugged in (yet the output is the same when the computer is powered only by the battery) and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect a possible automatic downclock made to save power when processor load is low, or is this output abnormal?
    EDIT: OK, I checked, and yes, the output does vary with the load; I reach 2.2 GHz when needed. But my underlying problem remains. I was checking my CPU clocking because I experienced poor performance when playing 720p video files on Ubuntu with VLC or mplayer while on battery (and I believe VLC by default only uses the CPU, not the GPU, to decode), whereas I haven't had such problems with VLC on Windows (which makes me think it isn't coming from a BIOS option; plus, every option in the BIOS regarding the CPU is turned ON).
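
    A quick way to watch the scaling in action (these are the standard cpufreq sysfs paths; the governor name varies by setup):

      $ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
      $ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
      $ grep "cpu MHz" /proc/cpuinfo    # repeat under load; the reported clocks should rise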

  • FREE EXAM VOUCHERS - SECURE YOURS NOW!

    - by michaela.seika(at)oracle.com
    Take the opportunity and come to the Oracle Implementation Specialist exam day for Oracle Applications, Middleware and Hardware during CeBIT. To make optimal use of time and resources, on March 2nd you have the chance to take the Implementation Specialist exam directly on site in Hanover. Please forward this invitation to the relevant colleagues so the exam can be scheduled in time: www.pearsonvue.com. A valid OPN membership is a prerequisite for participation. The exam location is the Multi Media Berufsbildende Schulen (MMBSS), Expo Plaza 3, 30539 Hannover. Please send your registration for the Implementation exam day to Michaela Seika by February 18th, with the following data: company, examinee & contact details, Oracle Company-ID, EDU-PIN, Yes/No for the allocation of an exam voucher. Please note: the free exam vouchers are only valid until March 2nd, 2011 (a maximum of two vouchers will be issued per company).

  • Radeon HD 6850 VMware 3D Support?

    - by Matt
    I'm a new Ubuntu user (new to all of Linux, actually). I've installed Ubuntu 11.10 x64 and have been enjoying it, but I wanted to see how it would perform using VMware for small-time gaming, since I find dual booting too much of a nuisance to even bother using Ubuntu at all (sorry!). I have an Asus EAH6850 DirectCU Radeon HD 6850 graphics card and I've installed the additional ATI/AMD proprietary FGLRX graphics driver, but when I open a Windows XP 32-bit machine I installed through VMware, I get this message: "The GPU driver currently installed on this host may cause issues with VMware products. If you notice any issues please disable the 3D support in the affected virtual machines." I still have 3D capabilities in the VM, but they are very, very choppy, even running the DX tests (the spinning cube). I've seen people on YouTube and other forums saying that with the new 3D acceleration in VMware 8, gaming is very possible through VMs (and I've seen them running the DX tests with the spinning cube very smoothly). I'm wondering if my graphics card isn't fully supported or if I have installed it wrong. Also, when I check system info (on the host Ubuntu machine) it says "Graphics VESA:BARTS". Should my Radeon HD 6850 be showing up there? The rest of my basic system info: i5 2500k, 8GB 1600MHz memory; the guest is running with access to all 4 cores of the processor and 3GB of memory assigned.

  • OpenGL profiling with AMD PerfStudio 2

    - by Aurus
    I'm rendering just a really small number of polygons for my UI, but I still tried to increase the FPS. In the end I removed redundant calls, which helped. I really don't want to lose FPS for nothing, so I keep looking for more improvements. The first thing I noticed is the "huge" stretch of time where no calls are made before SwapBuffers (the black one). Well, I know that OpenGL works asynchronously, so SwapBuffers has to wait until everything is done. But shouldn't PerfStudio mark this time as black as well? Correct me if I am wrong. The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). I mean, they should all upload 2 floats to the GPU; how can the time be so different from call to call? The program isn't even changed or anything like that. I also tried to look at other programs like gDebugger or CodeXL, but they often crashed and they show fewer statistics (only number of calls, redundant calls, etc.).
    EDIT: I also realized that the draw calls have different durations as well, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.

  • Registration Open Now! Virtual Developer Day: Oracle ADF Development

    - by Greg Jensen
    Is your organization looking at developing Web or Mobile applications based upon the Oracle platform? Oracle is offering a virtual event for Developer Leads, Managers and Architects to learn more about developing Web, Mobile and beyond based on Oracle applications. This event will provide sessions that range from introductory to deep dive, covering Oracle's strategic framework for developing multi-channel enterprise applications for the Oracle platforms. Multiple tracks cover every interest and every level, and include live online Q&A chats with Oracle's technical staff. For registration and information, please follow the link HERE. Sign up for one of the following events:
    - Americas: Tuesday, November 19th / 9am to 1pm PDT / 12pm to 4pm EDT / 1pm to 5pm BRT
    - APAC: Thursday, November 21st / 10am - 1:30pm IST (India) / 12:30pm - 4pm SGT (Singapore) / 3:30pm - 7pm AESDT
    - EMEA: Tuesday, November 26th / 9am - 1pm GMT / 1pm - 5pm GST / 2:30pm - 6:30pm IST

  • How can I set my resolution to 1280x1024 on an Acer Aspire Revo 3700?

    - by torbengb
    I've just set up a new nettop computer (Acer Aspire Revo 3700: CPU: Atom D525, GPU: Nvidia ION2) with a clean install of Ubuntu 10.10 using the standard USB pendrive method. Almost everything works OK, but the graphics are not: the recommended Nvidia driver is activated, but the monitor is not detected, so the resolution is wrong. How can I make Ubuntu detect my monitor? How can I get the proper resolution (1280x1024) in Ubuntu? My monitor is not a CRT but an LCD: a BenQ model T905, with 1280x1024 resolution at 60Hz, connected via a normal VGA cable. DVI or HDMI is not an option. When I go to System > Preferences > Monitors, I get: "It appears that your graphics driver does not support the necessary extensions to use this tool. Do you want to use your graphics driver vendor's tool instead?" Whether I answer YES or NO, the window that appears gives me no way to fix the problem. The main reason for getting this new computer was that I was sick of the graphics problems on the old one, which had a very ugly workaround that didn't give me hardware support - but at least I got the resolution. Why is this so difficult... sigh!
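
    When the driver can't detect the monitor, a commonly suggested workaround is adding the mode by hand with xrandr (the output name VGA-0 is illustrative; check what `xrandr -q` reports, and the modeline numbers come straight from cvt):

      $ cvt 1280 1024 60
      $ xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
      $ xrandr --addmode VGA-0 "1280x1024_60.00"
      $ xrandr --output VGA-0 --mode "1280x1024_60.00"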

  • Ubuntu 13.10 AMD/ATI proprietary driver slow boot time, lengthy login/logout delays

    - by NahsiN
    Ubuntu 13.10 is causing me major headaches with my AMD/ATI HD 5770 GPU. Below is a list of the problems I am currently encountering.
    1) The boot time is extended by at least 25s after installing Catalyst 13.4. Using the open source Radeon drivers, my boot time to the login screen is ~10s. With Catalyst 13.4 installed, the boot time increases to ~35s. This was not the case in Ubuntu 13.04, 12.10 or 12.04. I have done the driver installation both manually (instructions from wiki.cchtml.com) and via the Software Center, and there is no difference. I have not tried the Catalyst 13.8 beta driver.
    2) After manual installation of Catalyst 13.4, I get stuck at a black screen after logging in. I have to purge fglrx to resolve the problem. I tried sudo amdconfig --initial -f but it didn't help.
    3) The delay between logging in and Unity being displayed is ~10-15s for BOTH the open source and proprietary drivers. During the delay, it's just a black screen. Whenever I log out, there is again a ~10-15s delay, with the login screen appearing stuck before LightDM allows me to enter my password again. This is ridiculous! Yes, I could stick with the open source Radeon drivers, but I would like to install Steam and play my Valve collection on this machine. Is anybody else encountering similar issues?

  • how to upload & preview multiple images with a single input and store them into MySQL with PHP [closed]

    - by Nilesh Sonawane
    This is Nilesh; I am a newcomer to this field. I need a script where, when I click the upload button, the uploaded image is previewed and stored into the database, and where the user can upload up to 10 images on the same page this way, using PHP and MySQL. The script I have (a "Multi-Images Uploader" by Ahmed Hussein) selects multiple images at once and then uploads them, but I need to upload only one image at a time, have it previewed and stored into the database, and let the user repeat this for a minimum of 10 images.
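
    Since the goal is one image per submit, here is a minimal hedged sketch of the upload-preview-store step (the table name `images`, the existing `uploads/` directory, and the PDO credentials are all assumptions, not from the post):

      <?php
      $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
      if (!empty($_FILES['image']) && $_FILES['image']['error'] === UPLOAD_ERR_OK) {
          // give the file a unique name and move it out of the temp dir
          $name = uniqid() . '_' . basename($_FILES['image']['name']);
          move_uploaded_file($_FILES['image']['tmp_name'], "uploads/$name");

          // remember it in MySQL
          $stmt = $pdo->prepare('INSERT INTO images (filename) VALUES (?)');
          $stmt->execute(array($name));

          // preview the stored image back to the user
          echo "<img src='uploads/" . htmlspecialchars($name) . "' width='120'>";
      }
      ?>

    Repeating the form/submit cycle ten times covers the "10 images on the same page" requirement without multi-select.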

  • C++ Numerical Recipes &ndash; A New Adventure!

    - by JoshReuben
    I am about to embark on a great journey: over the next 6 weeks I plan to read through C++ Numerical Recipes, 3rd edition http://amzn.to/YtdpkS. I'll be reading this with an eye to C++ AMP, thinking about implementing the suitable subset (non-recursive, additive, commutative) to run on the GPU. APIs supporting HPC, GPGPU or MapReduce are all useful - providing you have the ability to choose the correct algorithm to leverage on them. I really think this is the most fascinating area of programming - a lot more exciting than LOB CRUD!!! When you think about it, everything is a function - we categorize & we extrapolate. As abstractions get higher & less leaky, sooner or later information systems programming will become a non-programmer task - you will be using WYSIWYG designers to build GUIs, MVVM, service mapping & virtualization, workflows, ORM entity relations in the data source. SharePoint / LightSwitch are not there yet, but every iteration gets closer. For information workers, managed code is a race to the bottom. As MS futures are a bit shaky right now, the provider-agnostic nature & higher barriers to entry of both C++ & numerical analysis seem like a rational choice to me. It's also fascinating - stepping outside the box. This is not the first time I've delved into numerical analysis:
    - 6 months ago I read Numerical Methods with Applications, which can be found for free online: http://nm.mathforcollege.com/
    - 2 years ago I learned the .NET Extreme Optimization library www.extremeoptimization.com - not bad
    - 2.5 years ago I read Schaum's Numerical Analysis book http://amzn.to/V5yuLI - not an easy read, as topics jump back & forth across chapters
    - 3 years ago I read Practical Numerical Methods with C# http://amzn.to/V5yCL9 (which is a toy learning language for this kind of stuff)
    - I also read through AI: A Modern Approach, 3rd edition, END to END http://amzn.to/V5yQSp - this took me a few years but was the most rewarding experience
    I'll post progress updates - see you on the other side!
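
    To make the C++ AMP angle concrete, here is a minimal hedged sketch (my own illustration, not from the book) of the kind of non-recursive, additive kernel that maps well to the GPU - a saxpy over array_views:

      #include <amp.h>
      #include <vector>
      using namespace concurrency;

      // y[i] += a * x[i], dispatched to the default accelerator (GPU if present)
      void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
          array_view<const float, 1> xv((int)x.size(), x);
          array_view<float, 1> yv((int)y.size(), y);
          parallel_for_each(yv.extent, [=](index<1> i) restrict(amp) {
              yv[i] += a * xv[i];
          });
          yv.synchronize();   // copy results back to host memory
      }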

  • What calls trigger a new batch?

    - by sebf
    I am finding my project is starting to show performance degradation and I need to optimize it. The answer to my previous question and this presentation from NVidia have helped greatly in understanding the performance characteristics of code using the GPU, but there are a couple of things that aren't clear and that I need to know to optimize my drawing. Specifically, which calls make the distinction between batches? I know that any state change causes a new batch, so that includes:
    - render state changes
    - buffer changes
    - shader changes
    - render target changes
    Correct? What else counts as a 'state change'? Does each Draw**Primitive() call constitute a new batch, even if I were to issue the same call twice with no state changes, or call it once on one part of the buffer, then again on another? If I were to update a buffer, but not change the bindings, would that be a new batch? That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in 'parallel' by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works - surely to do this you would need to change the buffer binding, and that would count as a state change? (Or is it a case of "you do, but it saves 16 calls so it's OK"?)
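
    Whatever the exact driver-level definition turns out to be, the standard mitigation is the same: sort draw submissions so that items sharing render state, shaders and buffers run back-to-back, paying each state change only once. A minimal hedged sketch (the handle fields and bit widths are illustrative, not from any specific API):

      #include <algorithm>
      #include <cstdint>
      #include <vector>

      struct DrawItem {
          uint32_t shaderId, textureId, bufferId;  // opaque state handles
          uint64_t SortKey() const {               // most expensive change in the top bits
              return ((uint64_t)shaderId << 40) |
                     ((uint64_t)textureId << 20) |
                     (uint64_t)bufferId;
          }
      };

      void SortForBatching(std::vector<DrawItem>& items) {
          std::sort(items.begin(), items.end(),
                    [](const DrawItem& a, const DrawItem& b) {
                        return a.SortKey() < b.SortKey();
                    });
          // when submitting, re-bind state only when the key differs from the previous item
      }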

  • why is glVertexAttribDivisor crashing?

    - by 2am
    I am trying to render some trees with instancing. This is rather weird, but before sleeping last night I checked the code and it was in a running state; when I got up this morning, it crashes when I call glVertexAttribDivisor. I haven't changed any code since yesterday. Here is how I am sending data to the GPU for instancing:
    glGenBuffers(1, &iVBO);
    glBindBuffer(GL_ARRAY_BUFFER, iVBO);
    glBufferData(GL_ARRAY_BUFFER, (ml_instance->i_positions.size()*sizeof(glm::vec4)), NULL, GL_STATIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, (ml_instance->i_positions.size()*sizeof(glm::vec4)), &ml_instance->i_positions[0]);
    And then in vertex specification:
    glBindBuffer(GL_ARRAY_BUFFER, iVBO);
    glVertexAttribPointer(i_positions, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(i_positions);
    glVertexAttribDivisor(i_positions, 1); // THIS IS WHERE THE PROGRAM CRASHES
    glDrawElementsInstanced(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0, TREES_INSTANCE_COUNT);
    I have checked ml_instance->i_positions; it has all the data that needs to render. I have checked the value of i_positions in the vertex shader; it is the same as whatever I have defined there. I am a little out of ideas here; everything looks pretty much fine. What am I missing?
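
    For what it's worth, a crash at exactly this call is most often a null entry point or missing container state rather than bad data. A hedged checklist (assumes a GLEW-style loader; `program` and `vao` are illustrative names):

      /* 1. glVertexAttribDivisor needs GL 3.3+ (or ARB_instanced_arrays as
            glVertexAttribDivisorARB); on an older context the resolved
            function pointer may be null, and calling it crashes. */
      if (glVertexAttribDivisor == NULL) { /* loader never resolved it */ }

      /* 2. A core-profile context also needs a VAO bound before any
            vertex-attribute state is set. */
      GLuint vao;
      glGenVertexArrays(1, &vao);
      glBindVertexArray(vao);

      /* 3. i_positions must be a valid attribute location (>= 0): */
      GLint loc = glGetAttribLocation(program, "i_positions");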

  • Low graphic mode after switching to fglrx drivers

    - by MrKenkadze27
    I have another problem on my laptop after trying to fix a different issue. Because of that issue, I wanted to switch to fglrx, and after a restart I got the "low graphics mode" screen. I went back to the terminal and got rid of this problem by purging and removing the fglrx driver, going back to problem number one. I tried a lot of methods to fix this, such as these, but they either switched back to the open-source drivers or didn't help at all. So, would anyone like to help? Maybe give some commands to try? My laptop has 2 GPUs: an AMD Radeon HD 7650M and Intel(R) HD Graphics 4000. The AMD one is always running, making my laptop too hot. Here's a paste of my Xorg.5.log file; I am sure it will be useful in finding my problem. Thanks! Please make answers easy to understand, as I am not an expert (this problem is keeping me from becoming one). Also, the AMD driver that can be downloaded from their site doesn't install; it says "non compatible graphics card", yet Ubuntu Software Updater sure installs it.

  • Efficient existing rating system for multiplayer?

    - by Nikolay Kuznetsov
    I would like to add a rating system to the online version of a board game. In this game there are many game rooms, each normally having 3-4 people. So I expect that a player's rating adjustment should depend on:
    - the ratings of the opponents in the game room
    - the number of players in the game room and the player's final place
    - activity: a person gets a rating increase for playing more games, more frequently
    - disconnects: if a person leaves a game room before the game ends, he should be punished with a large rating decrease
    I have found two related questions here: "Developing an ELO like point system for a multiplayer gaming site" and "Simplest most effective way to rank and measure player skill in a multi-player environment?". Please let me know which existing rating model would be most appropriate. A sketch of one candidate follows.
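
    Of the models discussed in those questions, the simplest fit for 3-4 player rooms is pairwise Elo: score each finished game as a set of head-to-head results, one per pair of players, ordered by final place. A minimal hedged sketch (K = 32 and the 400-point scale are the conventional Elo constants; tuning them is a separate exercise):

      #include <cmath>
      #include <vector>

      // probability that a player rated ra beats a player rated rb
      double Expected(double ra, double rb) {
          return 1.0 / (1.0 + std::pow(10.0, (rb - ra) / 400.0));
      }

      // ratings sorted by final place, best first; applies all pairwise updates at once
      void UpdateRatings(std::vector<double>& r, double K = 32.0) {
          std::vector<double> d(r.size(), 0.0);
          for (size_t i = 0; i < r.size(); ++i)
              for (size_t j = i + 1; j < r.size(); ++j) {
                  double e = Expected(r[i], r[j]);   // i finished above j
                  d[i] += K * (1.0 - e);             // i "won" the pair
                  d[j] -= K * (1.0 - e);             // j "lost" the pair
              }
          for (size_t i = 0; i < r.size(); ++i) r[i] += d[i];
      }

    Disconnect penalties and activity bonuses would then be separate adjustments applied on top of the game result.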

  • Deploying a very simple application

    - by vanna
    I have a very simple working console application written in C++, linked with a light static library. It is just for testing purposes. Now that the coding part is done, I would like to know the process of actually deploying the program. I wrote a very basic CMakeLists.txt that creates makefiles or VS projects to build the sources. I also have a program that calls the static library in order to run some Google tests. To me, the distribution of this application goes like this:
    - to developers: the src directory with the CMakeLists.txt file (multi-platform distribution), with a README.txt and an INSTALL.txt
    - to users: the executable and a README.txt
    - git repo: everything mentioned above, plus the sources for testing and the gtest external lib
    At this point, considering the complexity of my application, am I doing it right? Is there any reference that would formalize this deployment process so I can get better and go further? Say I would like to add dynamic libraries that can be updated, or external libraries like Boost: how should I package this to deploy it in a professional way?
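
    For reference, a minimal hedged sketch of a CMakeLists.txt matching the layout described above (target and file names are illustrative, not from the post):

      cmake_minimum_required(VERSION 2.8)
      project(MyApp CXX)

      # the light static library and the console executable that links it
      add_library(core STATIC src/core.cpp)
      add_executable(myapp src/main.cpp)
      target_link_libraries(myapp core)

      # 'make install' gives users the executable + README
      install(TARGETS myapp DESTINATION bin)
      install(FILES README.txt DESTINATION share/myapp)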
