Search Results

Search found 28707 results on 1149 pages for 'writing your own'.


  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want a gamma-correct rendering pipeline and you use regular sRGB textures as well as rendertargets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and rendertargets, but this might very well introduce precision and banding issues.

    Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB rendertarget is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically.

    But how does this work with multiple rendertargets? I only want to apply these corrections to the color textures and rendertarget, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. OK, so I just don't apply SRGBTexture = true; to my non-color textures, but with SRGBWriteEnable = true; a gamma correction is applied to all the values I write out to my rendertargets, no matter what I actually store there.

    I found some info on gamma over at Microsoft (http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx): "For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written." If I understand correctly, SRGBWriteEnable should only be applied to the first rendertarget, but according to my tests it is applied to all rendertargets instead.

    Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have any negative impact on color correctness. E.g. what if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion...

    Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my rendertarget? As far as I know, I DO need it, because texture filtering and mip sampling would otherwise happen in sRGB space when I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
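
    For what it's worth, a minimal HLSL sketch of the manual route (texture, sampler and shader names are made up; it uses the common pow(1/2.2) approximation of the sRGB curve rather than the exact piecewise function, and it doesn't address the blending/multisampling caveat above):

        // D3D9 effect-style sketch: hardware converts on read, shader converts on write
        texture AlbedoTexture;
        sampler AlbedoSampler = sampler_state
        {
            Texture = <AlbedoTexture>;
            SRGBTexture = true;    // sRGB -> linear on sampling, so filtering stays linear
        };

        struct PSOutput
        {
            float4 Color     : COLOR0;  // albedo: converted back to sRGB by hand below
            float4 Normal    : COLOR1;  // left linear, untouched
            float4 DepthSpec : COLOR2;  // left linear, untouched
        };

        // note: no SRGBWriteEnable in the pass; the correction is explicit and per-target
        PSOutput BuildGBufferPS(float2 uv : TEXCOORD0)
        {
            PSOutput o;
            float4 albedo = tex2D(AlbedoSampler, uv);
            o.Color     = float4(pow(albedo.rgb, 1.0 / 2.2), albedo.a); // linear -> ~sRGB
            o.Normal    = float4(0.5, 0.5, 1.0, 0.0);  // placeholder normal
            o.DepthSpec = float4(0.0, 0.0, 0.0, 0.0);  // placeholder depth/specular
            return o;
        }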

    Read the article

  • OWB/ODI Users: Last Chance to Submit and Vote On Sessions for OpenWorld 2010

    - by antonio romero
    Now is the last chance for OWB and ODI users to propose new ETL/DW/DI sessions for OpenWorld! Oracle OpenWorld 2010 "Suggest a Session" lets members of the Oracle Mix community submit and vote on papers/talks for OpenWorld. The most popular session proposals will be included in the conference program.

    One promising OWB-related topic has already been submitted: Case Study: Real-Time Data Warehousing and Fraud Detection with Oracle 11gR2. Dr. Holger Friedrich and consultants from sumIT AG in Switzerland built a real-time data warehouse and accompanying BI system for real-time online fraud detection with very limited resources and a short schedule. His presentation will cover:

    - How sumIT AG efficiently loads complex data feeds in real time in Oracle 11gR2 using, among others, Advanced Queues and XML DB
    - How they lowered costs and sped up development by leveraging the DB's development features, including Oracle Warehouse Builder
    - How they delivered a production-ready solution in a few short months using only three part-time developers

    Come vote for this proposal on Oracle Mix: https://mix.oracle.com/oow10/proposals/10566-case-study-real-time-data-warehousing-and-fraud-detection-with-oracle-11gr2

    I have already invited members of the OWB/ODI LinkedIn group (with over 1400 members) to come vote on topics like this one and propose their own. If enough of us vote on a few topics, we are sure to get some on the agenda! And if you have your own topics, propose them using the Suggest-a-Session instructions here: http://wiki.oracle.com/page/Oracle+OpenWorld+2010+Suggest-a-Session

    If you propose a topic, don't forget to come to LinkedIn and promote it! I have already sent the members of the LinkedIn group an email announcement about this, and I will send another in a week, with links to all topics submitted. Thanks, all!

    Read the article

  • Survey: Do you write custom SQL CLR procedures/functions/etc

    - by James Luetkehoelter
    I'm quite curious, because despite the great capability of CLR-based stored procedures to off-load those nasty operations T-SQL isn't that great at (like iteration, or complex math), I keep seeing a wealth of SQL 2008 databases with complex stored procedures and functions that would make great candidates. The in-house skill to create the CLR code exists as well, but there is flat-out resistance to using it. In one scenario I was told "Oh, iteration isn't a problem because we've trained...(read more)
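
    For context, a minimal sketch of the kind of CLR function the survey has in mind (the names and the math are illustrative only, not taken from the post):

        // C# SQL CLR scalar function: math-heavy work that is awkward in pure T-SQL
        using System;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public static class MathFunctions
        {
            [SqlFunction(IsDeterministic = true, IsPrecise = false)]
            public static SqlDouble CompoundRate(SqlDouble rate, SqlInt32 periods)
            {
                if (rate.IsNull || periods.IsNull)
                    return SqlDouble.Null;
                // loops and Math.* calls are cheap here, painful in T-SQL
                return new SqlDouble(Math.Pow(1.0 + rate.Value, periods.Value) - 1.0);
            }
        }

    After a CREATE ASSEMBLY and a CREATE FUNCTION ... EXTERNAL NAME registration, it is callable like any built-in scalar function.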

    Read the article

  • Design Anti-Patterns - C# - Do you call this a God object?

    - by Reddy S R
    I am writing the Portfolio module for my web site and it has 3 components: Gallery Category, Gallery, & Gallery Images. I am doing all the request handling (creating, reading, updating, other) for the above 3 components in 1 class, Portfolio. DB handling jobs for the Portfolio module are done in another file. My question is: even just for request-handling purposes, can you do all the operations in 1 class? -Reddy
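
    Not to pre-judge the answer, but a sketch of the usual way to split it (all type names hypothetical), so that no single class accumulates every operation:

        // one small handler per component; DB work stays behind its own interface
        public class GalleryCategory { public int Id; public string Name; }

        public interface IPortfolioRepository
        {
            void SaveCategory(GalleryCategory category);
            GalleryCategory LoadCategory(int id);
        }

        public class GalleryCategoryHandler
        {
            private readonly IPortfolioRepository _db;   // the separate DB-handling file
            public GalleryCategoryHandler(IPortfolioRepository db) { _db = db; }

            public void Create(GalleryCategory category) { _db.SaveCategory(category); }
            public GalleryCategory Read(int id)          { return _db.LoadCategory(id); }
        }
        // GalleryHandler and GalleryImageHandler would follow the same shape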

    Read the article

  • File saving disabled 'Saving has been disabled by system admin'

    - by Gubuntu
    I coded my own HTML website recently, and today I wished to add a Google Calendar object to it. I have not put this website on the web because it is for my own personal use and I can't buy a domain, so I just have a folder on my PC that I load the index.html from now and then. As I was saying, today I got an error while trying to save the Google Calendar object in. I am the system admin on my PC; in fact no one uses it but me, except when I have friends round. But for once my PC seems to think I'm some standard account user, because I couldn't save. I thought of clicking close and seeing if it came up with save-as, but it didn't; it said 'Are you sure you want to close without saving?' or something along those lines, and 'Saving has been disabled by your system admin.' I couldn't do anything. I tried looking at the settings of the file, and it had me as read-only in one of the selections, so I changed that to read & write, but to no avail. I did not save as root when I last edited the file, so I don't get what's going on. Help! P.S. This is on Ask Ubuntu not Superuser because it is on my Ubuntu PC and it appears to be a problem with Ubuntu, not root or hardware.

    Read the article

  • Hosting multiple sites on a single webapp in tomcat

    - by satish
    Scenario: I have a website - www.mydomain.com. Registered users will be given the choice of getting a permanent URL to their account on mydomain.com as a subdomain (username.mydomain.com), or they can opt to have their own domain like www.userdomain.com. So the user can access his/her account through the subdomain URL or their own hostname, and the request should be forwarded to a specific URL on mydomain.com. For example: xyz.mydomain.com or www.xyz.com should give the user account from www.mydomain.com/webapp/account?id=xyz. The user should be completely unaware of where the content is coming from. Setup: My website is running as a webapp in Tomcat 5.5.28 with Apache as the web server. I am using a VPS, which means I have control over all the configuration files (Apache, Tomcat and DNS server). Can you tell me what configuration is needed to achieve the above scenario?
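
    One common Apache-side approach, sketched under assumptions (mod_rewrite and mod_proxy_ajp loaded, Tomcat's AJP connector on its default port 8009; the xyz vhost is just an example):

        # wildcard subdomains: pull the account id out of the Host header
        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias *.mydomain.com

            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.mydomain\.com$ [NC]
            RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$ [NC]
            # proxy (not redirect), so the user never sees the internal URL
            RewriteRule ^/?$ ajp://localhost:8009/webapp/account?id=%1 [P,L]
        </VirtualHost>

        # a customer-owned domain points its DNS here and gets a vhost of its own
        # (or one shared vhost plus a RewriteMap translating hostname -> id)
        <VirtualHost *:80>
            ServerName www.xyz.com
            RewriteEngine On
            RewriteRule ^/?$ ajp://localhost:8009/webapp/account?id=xyz [P,L]
        </VirtualHost>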

    Read the article

  • Is there a way to make NoScript always allow .pdf files?

    - by Ben
    I'm using Firefox with NoScript to stop the bad stuff. I've also told Acrobat Reader to load .pdf files in its own window instead of inside the browser (because sometimes it locks up, and then I would have to restart the browser). However, whenever I come across a .pdf file, I always get a new tab completely covered by the NoScript box. Then, I can click anywhere in that page, and NoScript asks me if I'm sure I want to allow it. Then, Acrobat Reader is launched in its own window, but the Firefox tab remains, and I have to close it. It seems like NoScript is getting in the way of Acrobat's attempt to just open the file without making a new tab. Is there a way to tell NoScript to always allow .pdf files (or any other suggestion to make that annoying blank tab go away by itself)?

    Read the article

  • Is it bad practice for services to share a database in SOA?

    - by Paul T Davies
    I have recently been reading Hohpe and Woolf's Enterprise Integration Patterns, some of Thomas Erl's books on SOA, and watching various videos and podcasts by Udi Dahan et al. on CQRS and event-driven systems. Systems in my place of work suffer from high coupling. Although each system theoretically has its own database, there is a lot of joining between them. In practice this means there is one huge database that all systems use. For example, there is one table of customer data. Much of what I've read seems to suggest denormalising data so that each system uses only its own database, and any updates to one system are propagated to all the others using messaging. I thought this was one of the ways of enforcing the boundaries in SOA - each service should have its own database - but then I read this: http://stackoverflow.com/questions/4019902/soa-joining-data-across-multiple-services and it suggests this is the wrong thing to do. Segregating the databases does seem like a good way of decoupling systems, but now I'm a bit confused. Is this a good route to take? Is it ever recommended that you segregate a database per, say, SOA service, DDD bounded context, application, etc.?
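
    To make the messaging idea concrete, a small sketch (the types and wiring are hypothetical, in the style Udi Dahan describes, not any specific bus product's API):

        // event published by the service that owns customer data; every other
        // service keeps its own denormalized copy up to date from it
        public class CustomerAddressChanged
        {
            public int CustomerId { get; set; }
            public string NewAddress { get; set; }
        }

        public interface ICustomerCache
        {
            void UpdateAddress(int customerId, string newAddress);
        }

        // in a subscribing service: no cross-database join, just a local update
        public class CustomerAddressChangedHandler
        {
            private readonly ICustomerCache _localCopy;   // this service's own table
            public CustomerAddressChangedHandler(ICustomerCache localCopy) { _localCopy = localCopy; }

            public void Handle(CustomerAddressChanged message)
            {
                _localCopy.UpdateAddress(message.CustomerId, message.NewAddress);
            }
        }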

    Read the article

  • One Mac computer with multiple network connections?

    - by Kyle Lowry
    I have a Mac (OS X 10.5) that I would like to connect to a dedicated/isolated Internet connection (one that is not connected to the LAN), and a LAN. The LAN is set up with its own, separate, Internet connection which is shared by several dozen computers (and is quite slow). I want to set it up so that the Mac uses its own dedicated Internet connection (on a different account with a different company) for its Internet access, but can still access the local area network as well. How can I configure the Mac & the network to allow this?

    Read the article

  • Sharing Windows Store apps between accounts

    - by Klas Mellbourn
    In Windows 8 it seems natural to me that each person in a family has their own Microsoft Account with which they log in. If you pay for an app on the Windows Store, you can install that same app on several computers using the same Microsoft Account. Good. However, if several persons, in this case my children, each have their own account on the same computer, they do not get access to apps bought on a sibling's account, even if the app has been installed on the same computer. Bad. (Compare this to iOS, where you are allowed to have several iPhones with different iCloud-connected accounts but all using the same iTunes App Store account, which is perfect for a family where everyone can then use the same app that was bought just once.) Is there any way to share apps between Microsoft Accounts (e.g. members of the same family)? Alternatively, is there a way to run apps that are installed on a computer when you are logged in with a Microsoft Account different from the one used when installing the app?

    Read the article

  • Nice network diagram editor?

    - by Nicolas Raoul
    Writing a commercial proposal, I want to create a nice graphic showing the clients the architecture I thought of for their IT network, with servers, network connections, firewall, load-balancing, etc. For years I have been using dia, but I am tired of it because: the results are not satisfying, very few network elements are available, and each element's graphic representation is really ugly. Question: How do I create nice network diagrams? If a better set of elements were available for dia, that would be a solution.

    Read the article

  • What is the technical reason that so many social media sites don't allow you to edit your text?

    - by Edward Tanguay
    A common complaint I hear about Facebook, Twitter, Ning and other social sites is that once a comment or post is made, it can't be edited. I think this goes against one of the key goals of user experience: giving the user agency, or the ability to control what he does in the software. Even on Stack Exchange sites, you can only edit comments for a certain amount of time. Is the failure of so many web apps to let users edit their writing a technical shortcoming or a "feature by design"?

    Read the article

  • How to use shared_ptr for COM interface pointers

    - by Seefer
    I've been reading various usage advice relating to the new C++ standard smart pointers unique_ptr, shared_ptr and weak_ptr and generally 'grok' what they are about when I'm writing my own code that declares and consumes them. However, all the discussions I've read seem restricted to the simple situation where the programmer is using smart pointers in his/her own code, with no real discussion of techniques for working with libraries that expect raw pointers or other types of 'smart pointers' such as COM interface pointers.

    Specifically, I'm learning my way through C++ by attempting to get a standard Win32 real-time game loop up and running that uses Direct2D & DirectWrite to render text to the display showing frames per second. My first task with Direct2D is creating a Direct2D factory object with the following code from the Direct2D examples on MSDN:

        ID2D1Factory* pD2DFactory = nullptr;
        HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory);

    pD2DFactory is obviously an 'out' parameter, and it's here where I become uncertain how to make use of smart pointers in this context, if indeed it's possible. My inexperienced C++ mind tells me I have two problems: With pD2DFactory being a COM interface pointer type, how would shared_ptr work with the AddRef() / Release() member functions of a COM object instance? And are smart pointers able to be passed to functions in situations where the function uses an 'out' pointer parameter technique?

    I did experiment with the alternative of using _com_ptr_t from the comip.h header to help with pointer lifetime management, and declared the pD2DFactory pointer with the following code:

        _com_ptr_t<_com_IIID<ID2D1Factory, &__uuidof(ID2D1Factory)>> pD2DFactory;

    and it appears to work so far but, as you can see, the syntax is cumbersome :) So, I was wondering if any C++ gurus here could confirm whether smart pointers are able to help in cases like this and provide examples of usage, or point me to more in-depth discussions of smart pointer usage when needing to work with other code libraries that know nothing of them. Or is it simply a case of my trying to use the wrong tool for the job? :)
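
    One pattern that does work, sketched under the assumption that plain shared ownership is all that's needed (a dedicated COM smart pointer such as _com_ptr_t or ATL's CComPtr is usually still the better fit, since it also handles QueryInterface):

        // take the raw out-parameter first, then hand ownership to shared_ptr
        // with a deleter that calls Release() instead of delete
        #include <d2d1.h>   // link with d2d1.lib
        #include <memory>

        std::shared_ptr<ID2D1Factory> CreateD2DFactory()
        {
            ID2D1Factory* raw = nullptr;  // ordinary pointer for the out parameter
            HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &raw);
            if (FAILED(hr))
                return nullptr;
            // the factory starts with one COM reference; Release() runs when the
            // last shared_ptr copy is destroyed (copies don't need AddRef, since
            // shared_ptr's own control block counts them)
            return std::shared_ptr<ID2D1Factory>(raw,
                [](ID2D1Factory* p) { p->Release(); });
        }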

    Read the article

  • Moving from building internal applications in WPF to ASP.NET MVC?

    - by stuartmclark
    I have worked on quite a few internal applications for my work, and I have always defaulted to using WPF, but I am considering re-writing existing ones as web apps so that anyone in my company can use them without having to download anything from the network. I am just wondering if this is the way forward for developing new internal applications. So, should I stop using WPF and start using ASP.NET MVC for internal applications that a lot of people need to use?

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that the libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on.

    Recently a client made us question one of our fundamental assumptions about the .NET Framework and web services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right?

    Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test our client's hypothesis. Read More >
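
    For reference, a minimal sketch of the hand-rolled approach being tested (the URL, stylesheet name and omitted error handling are placeholders, not the article's actual code):

        // invoke the service directly, parse the XML, transform it to HTML
        using System.IO;
        using System.Net;
        using System.Xml;
        using System.Xml.Xsl;

        class ManualServiceCall
        {
            static string InvokeAndTransform()
            {
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://example.com/service.asmx/GetData");   // hypothetical endpoint
                using (var response = request.GetResponse())
                using (var stream = response.GetResponseStream())
                {
                    var doc = new XmlDocument();
                    doc.Load(stream);                       // parse the raw XML reply

                    var xslt = new XslCompiledTransform();
                    xslt.Load("FormatResults.xslt");        // hypothetical stylesheet
                    using (var writer = new StringWriter())
                    {
                        xslt.Transform(doc, null, writer);  // XML -> HTML fragment
                        return writer.ToString();
                    }
                }
            }
        }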

    Read the article

  • Can two users both control a third machine simultaneously using Synergy?

    - by Reason
    I've been a Synergy user for some time now, as I use a PC on the left side of my Mac. My girlfriend's desk and mine sit on either side of each other, and we'd like to know if it's possible for both of us to control the PC in the middle, each with our own separate mouse & keyboard. Here's a crude drawing of our setup: (1) her pc (2) my pc (3) my mac. Currently, 3 is running a synergy server, and 2 is running the client. But like I said, I'm wondering if there's a way for 1 & 3 to both control 2, each with their own mouse and keyboard. I'd ~love~ to have it set up where we could go even farther, and have both of our mice & keyboards able to control all 3 computers at the same time, for moments when we need to click or press keys for each other. But that seems a little too much to ask! Any thoughts?

    Read the article

  • Adding interactive graphical elements to text-based browser game with HTML5

    - by st9
    I'm re-writing an old virtual-world/browser-based game. It is text- and HTML-form-based with some static graphics. The client is HTML and JS. I want to introduce some interactive graphical elements to certain parts of the game, for example a 'customise character' page, with hooks to server-side and local data storage. I want to use HTML5/JS; what is the best approach to designing the website? For example, could I use Boilerplate and then embed these interactive elements in the page? Thanks
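
    One low-friction option is a plain <canvas> element dropped into the existing page, with local storage plus an XHR hook back to the server. A tiny sketch (the element id and endpoint are made up):

        // a <canvas> "customise character" widget embedded in a form-based page
        var canvas = document.getElementById('character-canvas');
        var ctx = canvas.getContext('2d');

        ctx.fillStyle = '#336699';
        ctx.fillRect(20, 20, 60, 60);  // stand-in for the character sprite

        canvas.addEventListener('click', function (e) {
            var rect = canvas.getBoundingClientRect();
            var x = e.clientX - rect.left;
            var y = e.clientY - rect.top;

            // keep the choice locally, then sync it to the server
            localStorage.setItem('lastClick', x + ',' + y);
            var xhr = new XMLHttpRequest();
            xhr.open('POST', '/api/character');  // hypothetical server hook
            xhr.send(JSON.stringify({ x: x, y: y }));
        });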

    Read the article

  • How to get multiple open-source projects to use a standard way of doing something.

    - by Marco
    Problem: In the last couple of weeks, I've used 3 different "repository" tools (listed in alphabetical order): gradle, ivy, and maven. I'm calling them "repository" tools because I've also used sbt -- which fortunately uses ivy to manage its cache or local repository. Each of these tools will create its own repository. The defaults are:

    - ~/.m2/repository for maven
    - ~/.gradle/cache for gradle
    - ~/.ivy2/cache for ivy

    Why can't they all use the same cache?

    Goal: I'd like to change the world so that all three build tools could use the same cache. I'm looking for advice about issues I'm likely to run into and smart ways to get around them. By "use the same cache", I do not mean "retrieve from another build tool's cache". I mean "retrieve from and store in another build tool's cache". While I could go ahead and submit issues to the three projects, I know from experience (as a developer on an open source project) that if you want something done, you're best off getting it done yourself. Also, it seems like I need to get all 3 communities on board to some degree. What is the recommended approach for getting this kind of thing done? How do I approach the different communities? Do I work on patches for the 3 different projects, or would I be better off creating my own "interface" project that deals with these issues and having the 3 tools interface with that? Is this a standards question that I need to address on that front? Lastly, if I'm missing something and this is possible (in a globally configurable fashion), then please let me know.
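
    For what it's worth, each tool's location can at least be redirected today; a sketch of the relevant knobs (the /shared/artifact-cache path is hypothetical):

        # Maven -- in ~/.m2/settings.xml:
        #   <settings>
        #     <localRepository>/shared/artifact-cache/maven</localRepository>
        #   </settings>
        #
        # Gradle -- environment variable:
        #   export GRADLE_USER_HOME=/shared/artifact-cache/gradle
        #
        # Ivy -- in ivysettings.xml:
        #   <ivysettings>
        #     <caches defaultCacheDir="/shared/artifact-cache/ivy"/>
        #   </ivysettings>

    The catch, and presumably the real obstacle to the goal, is that the on-disk layouts differ: Maven's local repository mirrors a remote repo's layout, while Ivy's and Gradle's caches use their own structures and metadata, so pointing all three at one directory still leaves three copies.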

    Read the article

  • Is it important for reflection-based serialization to maintain consistent field ordering?

    - by Matchlighter
    I just finished writing a packet builder that dynamically loads data into a data stream for eventual network transmission. Each builder operates by finding fields in a given class (and its superclasses) that are marked with a @data annotation. When finishing my implementation, I remembered that getFields() does not return results in any specific order. Should reflection-based methods for serializing arbitrary data (like my packets) attempt to preserve a specific field ordering (such as alphabetical), and if so, how?
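
    One way to answer the "how" part is to sort explicitly rather than trusting reflection order. A sketch, assuming a project-defined @Data marker annotation (redeclared here so the snippet stands alone):

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import java.lang.reflect.Field;
        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        @interface Data {}

        final class PacketFields {
            static List<Field> orderedDataFields(Class<?> type) {
                List<Field> fields = new ArrayList<>();
                // walk the class and its superclasses, like the packet builder does
                for (Class<?> c = type; c != null; c = c.getSuperclass()) {
                    for (Field f : c.getDeclaredFields()) {
                        if (f.isAnnotationPresent(Data.class)) {
                            fields.add(f);
                        }
                    }
                }
                // getDeclaredFields() guarantees no order, so impose one; qualify
                // by declaring class so shadowed field names stay distinct
                fields.sort(Comparator.comparing(
                        (Field f) -> f.getDeclaringClass().getName() + "#" + f.getName()));
                return fields;
            }
        }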

    Read the article

  • QAliber

    - by csharp-source.net
    QAliber includes 2 projects: a Visual Studio plug-in, and a Test Builder + Runner as an execution framework. The Visual Studio plug-in helps with writing automated tests over a GUI, with a control browser and record/play capabilities (and not only GUI tests: since the project incorporates into the development solution, API testing is easy to do as well). The Test Builder is a framework for creating a scenario by simply dragging and dropping created building blocks. It already provides a big repository of test blocks performing most tasks without coding.

    Read the article

  • TypeScript first impressions

    - by Bertrand Le Roy
    Anders published a video of his new project today, which aims at creating a superset of JavaScript that compiles down to regular current JavaScript. Anders is a tremendously clever guy, and it always shows in his work. There is much to like in the enterprise (good code completion, refactoring, and adoption of the module pattern instead of namespaces, to name three), but a few things made me raise an eyebrow.

    First, there is no mention of CoffeeScript or Dart, but he does talk briefly about Script# and GWT. This is probably because the target audience seems to be the same as the audience for the latter two, i.e. developers who are more comfortable with statically-typed languages such as C# and Java than dynamic languages such as JavaScript. I don't think he's aiming at JavaScript developers. Classes and interfaces, although well executed, are not especially appealing.

    Second, as with any code generation tool (and this is true of CoffeeScript as well), you'd better like the generated code. I didn't, unfortunately. The code that I saw is not the code I would have written. What's more, I didn't always find the TypeScript code especially more expressive than what it gets compiled to.

    I also have a few questions. Is it possible to duck-type interfaces? For example, if I have an IPoint2D interface with x and y coordinates, can I pass any object that has x and y into a function that expects IPoint2D, or do I necessarily need to create a class that implements that interface, and new up an instance that explicitly declares its contract? The appeal of dynamic languages is the ability to make objects as you go. This needs to be kept intact. More technical: why are generated variables and functions prefixed with _ rather than the $ that the EcmaScript spec recommends for machine-generated variables?

    In conclusion, while this is a good contribution to the set of ideas around JavaScript evolution, I don't expect a lot of adoption outside of devoted Microsoft developers, but maybe some influence on the language itself. But I'm often wrong. I would certainly not use it, because I disagree with the central motivation for doing this: Anders explicitly says he built this because "writing application-scale JavaScript is hard". I would restate that as "writing application-scale JavaScript is hard for people who are used to statically-typed languages". The community has built a set of good practices over the last few years that do scale quite well, and many people are successfully developing and maintaining impressive applications directly in JavaScript. You can play with TypeScript here: http://www.typescriptlang.org
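
    On the duck-typing question, a short sketch of what is being asked (IPoint2D is the post's own hypothetical). TypeScript's interfaces are in fact structural, so the object literal below type-checks without any class or explicit implements clause:

        // the post's hypothetical interface
        interface IPoint2D {
            x: number;
            y: number;
        }

        function distanceFromOrigin(p: IPoint2D): number {
            return Math.sqrt(p.x * p.x + p.y * p.y);
        }

        // structural typing: any object with numeric x and y qualifies,
        // no class and no new'd-up instance required
        const d = distanceFromOrigin({ x: 3, y: 4 });  // 5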

    Read the article

  • MVVM - child windows and data contexts

    - by GlenH7
    Should a child window have its own data context (View-Model) or use the data context of the parent? More broadly, should each View have its own View-Model? Are there any rules to guide making that decision? What if the various View-Models will be accessing the same Model? I haven't been able to find any consistent guidance on my question. The MS definition of MVVM appears to be silent on child windows. For one example, I have created a warning-message notification View. It really didn't need a data context, since it was passed the message to display. But if I needed to fancy it up a bit, I would have tapped the parent's data context. I have run into another scenario that needs a child window and is more complicated than the notification box. The parent's View-Model is already getting cluttered, so I had planned on generating a dedicated VM for the child window. But I can't find any guidance on whether this is a good idea or what the potential consequences may be. FWIW, I happen to be working in Silverlight, but I don't know that this question is strictly a Silverlight issue.
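
    For illustration, a minimal sketch of the dedicated-VM option (all names hypothetical; in Silverlight the window type would be ChildWindow rather than Window):

        // the child window gets its own small view-model, built from data the
        // parent VM already holds, so the parent's context stays uncluttered
        public class WarningViewModel
        {
            public string Message { get; private set; }
            public WarningViewModel(string message) { Message = message; }
        }

        // stand-in for a XAML-defined child window
        public class WarningWindow : System.Windows.Window { }

        public class ParentViewModel
        {
            public void ShowWarning(string message)
            {
                var child = new WarningWindow();                   // the child View
                child.DataContext = new WarningViewModel(message); // not the parent's context
                child.Show();
            }
        }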

    Read the article

  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to that IP address I am greeted with the DRAC login page (it has the Dell logo and everything). However, after trying the credentials root/calvin, I was denied access, so I think the previous owners set their own password. After doing some reading, it appears that I can reset the credentials to the default using: racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword. But upon entering the command, I get this error: bash: /usr/sbin/racadm: No such file or directory. This holds true even if I run sudo su prior to running the racadm command. If, however, I run sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword, there are no errors. Yet when I try to log into the DRAC via the web interface using the credentials root/newpassword, I am still not granted access. I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version from the Dell repositories, but after that was unsuccessful, I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"' (obviously replacing /dev/ttyS1 with the correct information for my system). ls -l /usr/sbin/ | grep racadm yields: -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm. I have tried these credentials after each attempt at changing the password: root/calvin, root/newpassword, admin/calvin, admin/newpassword. All have been unsuccessful. What is the next course of action I should take?

    Read the article
