Search Results


  • Advantages of Singleton Class over Static Class?

    Point 1) Singleton: we can get the singleton's object and pass it to other methods. Static class: we cannot pass a static class to other methods the way we pass objects.

    Point 2) Singleton: in the future, it is easy to change the logic of creating objects to some pooling mechanism. Static class: it is very difficult to implement pooling logic in a static class; we would need to make the class non-static and then make all of its methods non-static, so the entire calling code would need to change.

    Point 3) Singleton: can a singleton class be inherited by a subclass? The Singleton pattern imposes no restriction on inheritance, so we should be able to do this as long as the subclass preserves the singleton property. There's nothing fundamentally wrong with subclassing a class that is intended to be a singleton; there are many reasons you might want to do it, and many ways to accomplish it, depending on the language you use. Static class: we cannot inherit a static class from another static class in C#. Think about it this way: you access static members via the type name, like this: MyStaticType.MyStaticMember(); Were you to inherit from that class, you would have to access it via the new type name: MyNewType.MyStaticMember(); Thus the new type bears no relationship to the original when used in code, and there would be no way to take advantage of any inheritance relationship for things like polymorphism.
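    A minimal C# sketch of the three points (the names here are illustrative, not from the post): the singleton instance can be passed around and subclassed, while a static class allows neither.

        public class ConnectionPool
        {
            // Point 2: creation is centralised here, so swapping in real pooling
            // later only touches this class.
            private static readonly ConnectionPool instance = new ConnectionPool();
            public static ConnectionPool Instance => instance;
            protected ConnectionPool() { }   // protected keeps subclassing possible
            public virtual string Acquire() => "connection";
        }

        // Point 3: inheritance (and polymorphism) work; a static class cannot do this.
        public class LoggingConnectionPool : ConnectionPool
        {
            public override string Acquire() => "connection (logged)";
        }

        public static class Consumer
        {
            // Point 1: the singleton travels like any other object.
            public static void Use(ConnectionPool pool) { var c = pool.Acquire(); }
        }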

    Read the article

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I now ask udev to re-read the config? service udev restart and udevadm control --reload-rules don't help. So is there any valid way other than rebooting? (Yes, a reboot fixes this issue.) UPD: yes, I know I should prepend the commands with sudo, but neither of the commands I posted above changes anything in the ifconfig -a output: I still see eth1, not eth0. UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. Neither of the commands I posted above produces an error; they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected. UPD 3: let me explain the whole case better ;-) For development purposes I'm writing a script that clones virtual machines (VirtualBox-driven) and pre-sets them up in some way. So I perform a command to clone the VM and start it, and since the network interface MAC has changed, udev adds a second rule to the network persistent rules. Right after the machine is booted for the first time there are 2 rules: eth0, which does not exist, since its MAC existed only in the original VM image; and eth1, which exists, but all the configuration in all files refers to eth0, so it is not much good to me. So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and replace eth1 with eth0. So currently I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it would take more time, which is not a good thing at the building-VM stage) and just want to have my /dev rebuilt with some command so I have a ready-to-use VM without any reboots.
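    Not part of the original question, but for readers hitting the same wall: udev generally won't rename an interface that is already up, so one workaround is to fix the rules file as described and then rename the device directly with iproute2 (standard Ubuntu paths assumed; adjust for your image):

        # rewrite the persistent rules: drop the stale eth0 line, rename eth1 -> eth0
        sudo sed -i '/NAME="eth0"/d; s/NAME="eth1"/NAME="eth0"/' \
            /etc/udev/rules.d/70-persistent-net.rules
        sudo udevadm control --reload-rules   # future events use the edited rules

        # rename the live interface without rebooting (it must be down first)
        sudo ip link set dev eth1 down
        sudo ip link set dev eth1 name eth0
        sudo ip link set dev eth0 up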

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
    It’s the bane of most programmers’ lives – an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you’ve got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application’s unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what’s wrong.

    However, it’s good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption you made that is actually invalid.

    Coding assumptions

    Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, which copies a collection to an array and includes an item if it isn’t in the collection already, using a supplied IEqualityComparer:

        public static T[] ToArrayWithItem<T>(
            ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
        {
            // check if the object is in the collection already
            // using the supplied comparer
            foreach (var item in coll)
            {
                if (comparer.Equals(item, obj))
                {
                    // it's in the collection already
                    // simply copy the collection to an array
                    // and return it
                    T[] array = new T[coll.Count];
                    coll.CopyTo(array, 0);
                    return array;
                }
            }
            // not in the collection
            // copy coll to an array, and add obj to it
            // then return it
            T[] array = new T[coll.Count + 1];
            coll.CopyTo(array, 0);
            array[array.Length - 1] = obj;
            return array;
        }

    What are all the assumptions made by this fairly simple bit of code?

    - coll is never null
    - comparer is never null
    - coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array
    - the enumerator for coll returns all the items in the collection, in the order defined for the collection
    - comparer.Equals returns true if the items are equal (for whatever definition of ‘equal’ the comparer uses), false otherwise
    - comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T
    - coll will have fewer than 4 billion items in it (this is a built-in limit of the CLR)
    - array won’t be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR)
    - there are no threads that will modify coll while this method is running

    and, more esoterically:

    - the C# compiler will compile this code to IL according to the C# specification
    - the CLR and JIT compiler will produce machine code to execute the IL on the user’s computer
    - the computer will execute the machine code correctly

    That’s a lot of assumptions. Now, it could be that all these assumptions are valid for the situations in which this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you’ve gained about your own code, along with the modified assumptions.

    When code is running with an assumption or invariant it depended on broken, the result is ‘undefined behaviour’. Anything can happen, up to and including formatting the entire disk or making the user’s computer sentient and start doing a good impression of Skynet. You might think those things can’t happen, but at Halting-problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it’s important to fail fast and stop the program as soon as an invariant is broken, to minimise the damage that is done.

    What does this mean in practice?

    To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that’s some more assumptions you’ve made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken.

    Now, some assumptions you can take for granted unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well.

    For members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don’t necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what’s wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds (a sketch of this follows at the end).

    On my coding soapbox…

    A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

        try {
            ...
        }
        catch { }

    A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions caused by the unknown state, making debugging difficult because the catch-all has hidden the original problem. It’s much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That’s a pretty big ask!

    Secondly, using as when you should be casting. Doing this:

        (obj as IFoo).Method();

    or this:

        IFoo foo = obj as IFoo;
        ...
        foo.Method();

    when you should be doing this:

        ((IFoo)obj).Method();

    or this:

        IFoo foo = (IFoo)obj;
        ...
        foo.Method();

    There’s an assumption here that obj will always implement IFoo. If it doesn’t, then by using as instead of a cast you’ve turned an obvious InvalidCastException at the point of the cast, which will probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail fast if not; then it’s far easier to figure out what’s wrong.

    Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don’t leave it up to whoever’s debugging your code after you to figure it out.

    Conclusion

    It’s better to crash out and fail fast when an assumption is broken. If the program doesn’t, there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren’t good per se, but they give you some very useful information about your code that you didn’t know before. And that can only be a good thing.
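    A minimal sketch of the Debug.Assert approach mentioned above (my example, not the author's): the assertion documents the assumption, fails fast in debug builds, and compiles away in release builds.

        using System.Collections.Generic;
        using System.Diagnostics;

        public static class OrderProcessor
        {
            public static void Process(IList<int> batch)
            {
                // Assumption: callers hand us batches already sorted ascending.
                Debug.Assert(IsSorted(batch), "Process() expects a sorted batch");
                // ... bulk of the method runs on that assumption ...
            }

            private static bool IsSorted(IList<int> xs)
            {
                for (int i = 1; i < xs.Count; i++)
                    if (xs[i - 1] > xs[i]) return false;
                return true;
            }
        }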

    Read the article

  • Are you at Super Computing 10?

    - by Daniel Moth
    Like last year, I was going to attend SC this year, but other events are unfortunately keeping me here in Seattle next week. If you are going to be in New Orleans, have fun and be sure not to miss out on the following two opportunities. MPI Debugging UX Study: Throughout the week, my team is conducting 90-minute studies on debugging MPI applications within Visual Studio. In exchange for your feedback (under NDA) you will receive a Microsoft gratuity (and the knowledge that you are impacting the development of Visual Studio). If you are interested, sign up at the Microsoft Information Desk in the Exhibitor Hall during exhibit hours. Outside of exhibit hours, send email to [email protected]. If you took part in the GPGPU study, this is very similar except it is for MPI. Microsoft High Performance Computing Summit: On Monday the 15th, the Microsoft annual user group meeting takes place. Shuttle transportation and lunch are provided. For full details of this event and to register, please visit the official event page. Comments about this post are welcome at the original blog.

    Read the article

  • beginner - best way to do a 'Confirm' page? [closed]

    - by W_P
    I am a beginning web app developer, wondering about the best way to implement a "Confirm Page" upon form submission. I have heard that it's best for the script that a form POSTs to to handle the POST data and then redirect to another page, so the user isn't directly viewing the page that was POSTed to. My question is about the best way to implement a "Confirm before data save" page. Do I:

    1. Have my form POST to a script, which marshals the data, puts it in a GET, and redirects to the confirm page, which unmarshals and displays the data in another form, where the user can then either confirm (which causes another POST to a script that actually saves the data) or deny (which causes the user to be redirected back to the original form, with their input added)?
    2. Have my form POST directly to the confirm page, which is displayed to the user and then, like #1, gives the user the option to confirm or deny?
    3. Have my form GET the confirm page, which then does the expected behavior?

    I feel like there is a common-sense answer to this question that I am just not getting.
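    For what it's worth, here is a minimal sketch of option 1 in ASP.NET Core minimal-API style (the routes, the in-memory drafts store, and the "payload" field are all hypothetical illustration, not from the question): the POST stashes the data server-side and redirects, so the confirm page is reached by a plain GET and a refresh never re-submits.

        using System.Collections.Concurrent;

        var builder = WebApplication.CreateBuilder(args);
        var app = builder.Build();

        // token -> submitted-but-unsaved data (in-memory for the sketch)
        var drafts = new ConcurrentDictionary<string, string>();

        app.MapPost("/submit", async (HttpRequest req) =>
        {
            var form = await req.ReadFormAsync();
            var token = Guid.NewGuid().ToString("N");
            drafts[token] = form["payload"].ToString();         // stash, don't save yet
            return Results.Redirect($"/confirm?token={token}"); // Post/Redirect/Get
        });

        app.MapGet("/confirm", (string token) =>
            drafts.TryGetValue(token, out var data)
                ? Results.Text($"Confirm? {data} (POST /save, or go back to edit)")
                : Results.NotFound());

        app.MapPost("/save", (string token) =>
            drafts.TryRemove(token, out var data)
                ? Results.Redirect("/done")   // persist `data` before redirecting
                : Results.NotFound());

        app.Run();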

    Read the article

  • What happened to the Journal of Game Development?

    - by Ricket
    The lengthy mission statement from its website states: The lack of game-specific research has prevented many in the academic community from embracing game development as a serious field of study. The Journal of Game Development (JOGD), however, provides a much-needed, peer-reviewed, medium of communication and the raison d'etre for serious academic research focused solely on game-related issues. The JOGD provides the vehicle for disseminating research and findings indigenous to the game development industry. It is an outlet for peer-reviewed research that will help validate the work and garner acceptance for the study of game development by the academic community. JOGD will serve both the game development industry and academic community by presenting leading-edge, original research, and theoretical underpinnings that detail the most recent findings in related academic disciplines, hardware, software, and technology that will directly affect the way games are conceived, developed, produced, and delivered. The Journal of Game Development was established in 2003. It's hard to find any information about the issues but at four issues per year, I estimate the last issue was distributed sometime in 2005 or 2006. It had a good editorial board of college professors and a founding editor from Ubisoft. The list of articles looks good. The price was reasonable. So what happened to it? Its website recently went down but you can see the last Archive.org version. The editor-in-chief is a professor at my school so I intend to ask him in person in a week or two, but I thought I'd see what you might be able to dig up about it first. Of course I will be sure to add an answer with his official word on the matter at that time.

    Read the article

  • How does eMail encryption work?

    - by Dummy Derp
    I have been going over YouTube watching videos on eMail encryption and everyone seems to explain it from a different perspective. Some do it for a CompTIA exam while others just provide a primer. Here is what I understood:

    Step 1: You compose an email that you want to send. Without encryption, it will be simple ASCII text that will be visible to anyone along the way.

    Step 2: You generate a digital signature to make sure that nobody gets to re-transmit your email and claim it was you. The digital signature is generated using the sender's private key, which is usually a hash of the password, and is then combined with the original message to form one long hash string. These signatures are one-time-use-only and a new one is calculated for every email.

    Step 3: You encrypt the body of your email using the receiver's public key, so that the only person who can read it is the intended receiver, using their private key.

    Step 4: When you hit send, what is transmitted is gibberish to everyone apart from the intended receiver, who will decrypt it using their private key.

    And there are various ways to do it like PEM, PGP, etc. Correct me where I am wrong or refine where necessary.
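    To make the sign-then-encrypt ordering concrete, here is a hedged .NET sketch using the CMS/PKCS#7 classes (certificate loading omitted; this illustrates the flow above, not any particular mail client's implementation - and note the signature is computed over a hash of the message, not of a password):

        using System.Security.Cryptography.Pkcs;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        public static class MailCrypto
        {
            public static byte[] SignThenEncrypt(
                string body, X509Certificate2 sender, X509Certificate2 recipient)
            {
                // Step 2: sign with the sender's private key
                var signed = new SignedCms(new ContentInfo(Encoding.UTF8.GetBytes(body)));
                signed.ComputeSignature(new CmsSigner(sender));

                // Step 3: encrypt the signed blob with the recipient's public key
                var enveloped = new EnvelopedCms(new ContentInfo(signed.Encode()));
                enveloped.Encrypt(new CmsRecipient(recipient));

                // Step 4: only the recipient's private key can open the result
                return enveloped.Encode();
            }
        }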

    Read the article

  • OOP concept: is it possible to update the class of an instantiated object?

    - by Federico
    I am trying to write a simple program that should allow a user to save and display sets of heterogeneous, but somehow related, data. For clarity's sake, I will use a representative example of vehicles. The program flow is like this:

    - The program creates a Garage object, which is basically a class that can contain a list of Vehicles objects
    - Then the user creates Vehicles objects; these Vehicles each have a property, let's say License Plate Nr.
    - Once created, the Vehicle object gets added to a list within the Garage object
    - --Later on--, the user can specify that a given Vehicle object is in fact a Car object or a Truck object (thus giving access to some specific attributes such as Number of seats for the Car, or Cargo weight for the Truck)

    At first sight, this might look like an OOP textbook question involving a base class and inheritance, but the problem is more subtle, because at object creation time (and until the user decides to give more info), the computer doesn't know the exact Vehicle type. Hence my question: how would you proceed to implement this program flow? Is OOP the way to go? Just to give an initial answer, here is what I've come up with until now. There is only one Vehicle class and the various properties/values are handled by the main program (not the class) through a dictionary. However, I'm pretty sure that there must be a more elegant solution (I'm developing using VB.net):

        Public Class Garage
            Public GarageAdress As String
            Private _ListGarageVehicles As New List(Of Vehicles)
            Public Sub AddVehicle(Vehicle As Vehicles)
                _ListGarageVehicles.Add(Vehicle)
            End Sub
        End Class

        Public Class Vehicles
            Public LicensePlateNumber As String
            Public Enum VehicleTypes
                Generic = 0
                Car = 1
                Truck = 2
            End Enum
            Public VehicleType As VehicleTypes
            Public DictVehicleProperties As New Dictionary(Of String, String)
        End Class

    NOTE that in the example above the public/private modifiers do not necessarily reflect the original code.

    Read the article

  • Which toolkit to use for 3D MMO game development?

    - by Ahmet Yildirim
    Lately I've been thinking about which path to follow for developing a 3D online game. I have googled a lot, but I couldn't find a good article that covers both game development and online server & client development in the same context. This question has been on my mind for about 2 weeks now. So... yesterday I started developing a game from scratch using the Irrlicht .NET wrapper together with the socket library of .NET, which I'm already familiar with. But I found out the .NET wrapper of Irrlicht is not totally finished yet and still lacks features from the original, so I lost all my motivation :/. So I thought, why not ask the experts before I run into another dead end... What game engine and networking library is the best way to go for 3D MMO development? Here are some of my early conclusions; please let me know where I'm wrong.

    C++:
    - Best performance for 3D graphics
    - Most game engines have native C++ libraries
    - Lacks a solid socket library (unlike .NET)
    - Lacks IntelliSense support

    C#:
    - IntelliSense support
    - .NET socket library
    - Lacks 3D graphics performance
    - Lacks a native, solid 3D game engine

    Read the article

  • Windows Azure BidNow Sample – definitely worth a look

    - by Eric Nelson
    [Quicklink: download the new Windows Azure sample from http://bit.ly/bidnowsample] On Monday (17th May), in the 6 Weeks of Windows Azure training (now full) Live Meeting call, Adrian showed BidNow as a sample application built for Windows Azure. I was aware of BidNow but had not found the time to take a look at it, nor seen it running, before. Adrian convinced me it was worth a further look. In brief, I like it :-) It is more than Hello World, but still easy enough to follow. Bid Now is an online auction site designed to demonstrate how you can build highly scalable consumer applications using Windows Azure. It is built using Visual Studio 2008 and Windows Azure, and uses Windows Azure Storage. Auctions are processed using Windows Azure Queues and Worker Roles. Authentication is provided via Live Id. Bid Now works with the Express versions of Visual Studio and above. There are extensive setup instructions for local and cloud deployment. You can download from http://bit.ly/bidnowsample (http://code.msdn.microsoft.com/BidNowSample) and also check out David's original blog post. Related Links: UK based? Sign up to UK fans of Windows Azure on ning. Check out the Microsoft UK Windows Azure Platform page for further links.

    Read the article

  • Activist shared printing material gallery

    - by Dave
    What would you say would be the best way to do this: we would like to create a section on our activist community FB page and website in order to share with everyone images and files ready for printing pamphlets, brochures, t-shirts, stickers, etc. Let's say we have some cool slogans for t-shirts: we would like to show them in a gallery, and offer for download the original design files needed for a print shop to create the t-shirts. And the same thing for all other kinds of media. We want to enable anyone to just download the files for free and easily create printed materials with them. But besides offering this hybrid between a picture gallery and a downloads manager, we would also like to make it very easy for anyone to upload and share their own files with the community, to make it a true collaboration initiative, be it that they get posted automatically, or that we first review and approve all uploads. Cafepress or Spreadshirt let you upload your design and sell your own merchandise. We need something similar, but where people can then download working files for making quality printings and materials. What apps, tools, services or methods are out there with which you think this could be best done? We have some ideas, but we would like to hear some more!

    Read the article

  • What are the legal considerations when forking a BSD-licensed project?

    - by Thomas Owens
    I'm interested in forking a project released under a two-clause BSD license:

        Copyright (c) 2010 {copyright holder}
        All rights reserved.

        Redistribution and use in source and binary forms, with or without
        modification, are permitted provided that the following conditions are met:

        (1) Redistributions of source code must retain the above copyright notice,
        this list of conditions and the disclaimer at the end. Redistributions in
        binary form must reproduce the above copyright notice, this list of
        conditions and the following disclaimer in the documentation and/or other
        materials provided with the distribution.

        (2) Neither the name of {copyright holder} nor the names of its contributors
        may be used to endorse or promote products derived from this software
        without specific prior written permission.

        DISCLAIMER: THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
        CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
        NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
        PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
        CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
        EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
        PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
        OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
        WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
        OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
        ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    I've never forked a project before, but this project is very similar to something that I need/want. However, I'm not sure how far I'll get, so my plan is to pull the latest from their repository and start working. Maybe, eventually, I'll get it to where I want it, and be able to release it. Is this the right approach? How, exactly, does this impact forking of the project? How do I track who owns what components or sections (what's copyright me, what's copyright the original creators, once I start stomping over their code base)? Can I fork this project? What must I do prior to releasing, when/if I decide to release the software derived from this BSD-licensed work?

    Read the article

  • Semantic Form Markup for Yes or No Questions

    - by sholsinger
    I frequently receive mock-ups of HTML forms with the following prototype:

        Some long winded yes or no question?   (o) Yes   ( ) No

    The (o) and ( ) in this prototype represent radio buttons. My personal view is that if the question has only a true or false value then it should be a check box. That said, I have seen this sort of "layout" from almost every designer I've ever worked with. If I were not to question their decision, or question the client's decision, I'd probably mark it up like this:

        <p class="pseudo_label">Some long winded yes or no question?</p>
        <input type="radio" name="the_question" id="the_question_yes" value="1">
        <label for="the_question_yes" class="after_radio">Yes</label>
        <input type="radio" name="the_question" id="the_question_no" value="0">
        <label for="the_question_no" class="after_radio">No</label>

    I really don't want to do that. I want to push back and convince them that this should really be a check box and not two radio buttons. But my question is: if I can't convince them – you're welcome to help me try – how should I code that original design requirement such that it is semantic and at least understandable for screen reader users? If I were able to convince my tormentors to change their minds, I would likely code it in the following fashion:

        <label for="the_question">Some long winded yes or no question?</label>
        <input type="checkbox" name="the_question" id="the_question" value="1">

    What do you think about this issue? Should I push back? Possibly more importantly, is either way semantically correct?

    UPDATE: I have posted a related question on the UI SE per your suggestions. You can find it here: http://ui.stackexchange.com/q/3335/3493

    Read the article

  • HLSL - Creating Shadows in 2D

    - by richard
    The way that I create shadows is with the following technique: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/ But I have questions about the HLSL. The way that I currently do it is: I have a black and white image, where black means 'object' and white means 'nothing'. I then distort the image like in the tutorial. I do this with a pixel shader, but instead of rendering to the screen, I render to a texture and read it back into my application. I then take this, create the shadows, and send it back to the graphics card to undo the distortion after the shadow has been added - this comes back and I have a stencil of shadow. I can put this on top of the original image and send them back to the graphics card, which then puts them on the screen. To me this is a lot of back and forth. Is there a way I can avoid this? The problem that I am having is that I need to basically go through all positions in the texture 3 times, and use the new texture every time instead of the original one. I tried to read up on passes, but I don't think I am heading in the right direction there. Help?
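    One way to cut out the CPU round-trips - sketched here in XNA-style C# under the assumption of XNA 4 render targets; the effect and texture names are mine, not the article's - is to ping-pong between two render targets so every pass stays on the GPU:

        // assumed fields: GraphicsDevice device; SpriteBatch spriteBatch; int size;
        // RenderTarget2D rtA, rtB; Texture2D occluderMap;
        // Effect distortEffect, shadowEffect, undistortEffect;
        // created once, e.g. in LoadContent():
        //   rtA = new RenderTarget2D(device, size, size);
        //   rtB = new RenderTarget2D(device, size, size);

        void RunPass(Effect effect, Texture2D input, RenderTarget2D output)
        {
            device.SetRenderTarget(output);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                              null, null, null, effect);
            spriteBatch.Draw(input, Vector2.Zero, Color.White);
            spriteBatch.End();
        }

        void DrawShadows()
        {
            // distort -> shadow -> undistort, never leaving video memory
            RunPass(distortEffect, occluderMap, rtA);
            RunPass(shadowEffect, rtA, rtB);
            RunPass(undistortEffect, rtB, rtA);
            device.SetRenderTarget(null); // back to the backbuffer; composite rtA over the scene
        }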

    Read the article

  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line.

    Version 1:

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            string line;
            while (file.good()) {
                getline(file, line);
                cout << line << endl;
            }
        }

    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions.

    Version 2:

        void printLine(const string & line) {
            cout << line << endl;
        }

        void printLines(fstream & file) {
            string line;
            while (file.good()) {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            printLines(file);
        }

    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of overkill? The original version is simple and in some sense already does one thing - prints a file. The second version will lead to a large number of really small functions which may be far less legible than the first version. Wouldn't it be, in this case, better to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

    Read the article

  • JDK 7 Feature Complete Milestone Reached

    - by Henrik Ståhl
    The JDK 7 project has reached Feature Complete (FC). This means that development and QA have finished all planned feature and test development work in the release and are moving the focus to testing and bug fixing on all supported JDK 7 platforms. This is a major step towards JDK 7 General Availability (GA) and implies that we are tracking close to the plan published on openjdk.java.net. (The original plan was FC on 12/16. We hit this less than a week late, but verifying that everything was done in time took a couple of weeks due to the intervening holidays.) The definition of the FC milestone allows for exceptions to be integrated later. There are very few such exceptions in the project, the most prominent being updated JAXP/JAXB/JAX-WS and integration of the enhanced JMX agent from JRockit. Our project management does not expect the exceptions to have any negative impact on the release plan. The project may still be delayed if the Expert Groups for the JSRs included in Java SE 7 (203, 292, 334, 336) decide to introduce changes which cannot be accomodated within the existing schedule. Apart from that caveat, Oracle remains confident with the published plan.

    Read the article

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question) - I was just wondering if I was doing it right, or if there is a better way. I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front), so that the tiles overlapped properly to create the proper effect. After some reading, I quickly realised that this is the "old hat" 2D approach. This became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space) - so it seemed logical that I take advantage of the depth buffer. This meant that I didn't have to worry about rendering stuff in the correct order. However, I faced a problem. If you use GL_DEPTH_TEST and GL_BLEND together, it creates an effect where objects are blended with the background before they are "sorted" by z order (meaning that you get a weird kind of overlap where the transparency should be). Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android):

        create() {
            // ...
            // some other code here
            // ...
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // ...
            // bind texture and create vertices
            // ...
        }

    So the question is: How do I solve the transparency overlap problem?
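    For reference, one common fix (not necessarily the answer the poster found): with cut-out sprites, GL ES 1.x alpha testing discards fully transparent fragments before they write depth, so depth testing and blending can coexist; soft edges still need sorted back-to-front drawing with depth writes off. A hedged libgdx sketch:

        // in create(), alongside the flags above
        Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
        Gdx.gl.glEnable(GL10.GL_ALPHA_TEST);
        Gdx.gl10.glAlphaFunc(GL10.GL_GREATER, 0.5f); // keep only mostly-opaque fragments

        // for genuinely translucent geometry: draw it last, sorted back to front,
        // with depth *testing* on but depth *writes* off
        Gdx.gl.glDepthMask(false);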

    Read the article

  • How to use the Raring/Saucy netboot installer to install Precise?

    - by mikepurvis
    We have a Haswell motherboard with onboard ethernet controllers which are not supported by the Precise (3.2) kernel. However, we're using netboot installation, and we'd really like to stick with the LTS version. Once the Precise install is completed, we can install the linux-generic-lts-saucy package, which gets us the ethernet hardware support that is ultimately required. So, our options are: Plug in a USB-Ethernet (or even wifi) dongle and perform the install that way. Modify the Precise installer to somehow include the required driver (a udeb, or some early_command invocation?). Modify the Raring installer (3.8 kernel, which supports the device) to instead install Precise. If it's possible, the third option seems like the simplest and most logical to me. Now, we are already using the precise-updates installer (Aug 2013), as opposed to the original April 2012 installer. However, the precise-updates installer still appears to use the 3.2 kernel. I'm already comfortable with preseeding and modifying the netboot initrd. So my question is: can I somehow modify the Raring/Saucy netboot initrd to instead install Precise? Thanks.
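    Not an answer from the post, but since the poster is already comfortable with preseeding, the usual lever for the third option is the installer's suite selection. A hedged sketch (the keys are standard debian-installer preseeds; whether the newer installer can bootstrap Precise cleanly is exactly the open question):

        # preseed.cfg fragment: boot the Raring/Saucy netboot kernel, target Precise
        d-i mirror/country string manual
        d-i mirror/http/hostname string archive.ubuntu.com
        d-i mirror/http/directory string /ubuntu
        d-i mirror/suite string precise        # suite for the installed system
        d-i mirror/udeb/suite string saucy     # udebs must match the installer kernel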

    Read the article

  • Ubuntu 12.04 freezes when booting

    - by Agustín González
    I installed Ubuntu 12.04 LTS from the LiveCD. After finishing the installation process and booting correctly, I applied the pending updates, which asked me to reboot. After rebooting, an error appeared saying "Out of Range". I pressed CTRL+ALT+F1, logged in on the tty1 terminal, and edited the xorg.conf file, adding VertRefresh 50.0 - 60.0 to the Screen section, which should solve the "Out of Range" problem mentioned before. After applying the changes and rebooting again, the boot splash screen (screenshot: http://t.bb/fH) is all I see now: it freezes there. I even waited 2 hours and nothing happened. Can anybody help? Thank you! (Translated from the original Spanish.)

    Read the article

  • Integration error in high velocity

    - by Elektito
    I've implemented a simple simulation of two planets (simple 2D disks really) in which the only force is gravity, and there is also collision detection/response (collisions are completely elastic). I can launch one planet into orbit of the other just fine. The collision detection code, though, does not work so well. I noticed that when one planet hits the other in free fall, it speeds backward and goes much higher than its original position. Some poking around convinced me that the simplistic Euler integration is causing the error. Consider this case: one object has a mass of 1 kg and the other has a mass equal to Earth's. Say the object is 10 meters above ground. Assume that our dt (delta t) is 1 second. The object falls to a height of 9 meters at the end of the first iteration, 7 at the end of the second, 4 at the end of the third, and 0 at the end of the fourth iteration. At this point it hits the ground and bounces back with a speed of 10 meters per second. The problem is that with dt=1, on the first iteration it bounces back to a height of 10. It takes several more steps for the object to change its course. So my question is: what integration method can I use that fixes this problem? Should I split dt into smaller pieces when velocity is high? Or should I use another method altogether? What method do you suggest? EDIT: You can see the source code here at github: https://github.com/elektito/diskworld/
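    A hedged sketch of both ideas combined (illustrative C#, not the poster's code): substepping shrinks the per-step error, and semi-implicit (symplectic) Euler - update velocity first, then position with the new velocity - is far better behaved for gravity problems than explicit Euler.

        static class FreeFall
        {
            const double G = 9.81; // uniform gravity for the 1-D falling-object example

            public static void Step(ref double height, ref double velocity,
                                    double dt, int substeps)
            {
                double h = dt / substeps;
                for (int i = 0; i < substeps; i++)
                {
                    velocity -= G * h;        // semi-implicit: update velocity first...
                    height   += velocity * h; // ...then position, using the NEW velocity
                    if (height <= 0)          // crude elastic bounce at the ground
                    {
                        height = -height;
                        velocity = -velocity;
                    }
                }
            }
        }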

    Read the article

  • double screen in ubuntu 12.04?

    - by johan
    I am using Ubuntu 12.04 and my video card is an ATI Radeon 5000. I cannot use a double screen (extended version). I get this error:

        The selected configuration for displays could not be applied
        requested position/size for CRTC 148 is outside the allowed limit:
        position=(1280, 0), size=(1280, 768), maximum=(1440, 1440)

    I tried all display settings but it does not work. Some outputs from the system settings:

        root@ubuntu:~# lshw -C display
          *-display
               description: VGA compatible controller
               product: Madison [Radeon HD 5000M Series]
               vendor: Hynix Semiconductor (Hyundai Electronics)
               physical id: 0
               bus info: pci@0000:01:00.0
               version: 00
               width: 64 bits
               clock: 33MHz
               capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
               configuration: driver=fglrx_pci latency=0
               resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff

        root@ubuntu:~# aticonfig --initial
        Uninitialised file found, configuring. Using /etc/X11/xorg.conf
        Saving back-up to /etc/X11/xorg.conf.original-0

        root@ubuntu:~# cat /etc/X11/xorg.conf
        Section "ServerLayout"
            Identifier "aticonfig Layout"
            Screen 0 "aticonfig-Screen[0]-0" 0 0
        EndSection

        Section "Module"
            Load "glx"
        EndSection

        Section "Monitor"
            Identifier "aticonfig-Monitor[0]-0"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
        EndSection

        Section "Device"
            Identifier "aticonfig-Device[0]-0"
            Driver "fglrx"
            BusID "PCI:1:0:0"
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            DefaultDepth 24
        EndSection

        Section "Screen"
            Identifier "aticonfig-Screen[0]-0"
            Device "aticonfig-Device[0]-0"
            Monitor "aticonfig-Monitor[0]-0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

    I would appreciate any suggestions on how to solve the problem. Thank you.
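    Not from the post, but the error text suggests the combined desktop (2560x768) exceeds the driver's default 1440x1440 virtual framebuffer. One thing worth trying is enlarging it with a Virtual line in the active "Screen" section of /etc/X11/xorg.conf (the sizes below are illustrative; use your own totals):

        Section "Screen"
            Identifier "aticonfig-Screen[0]-0"
            Device "aticonfig-Device[0]-0"
            Monitor "aticonfig-Monitor[0]-0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
                Virtual 2560 1440   # at least the sum of widths x the tallest height
            EndSubSection
        EndSection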

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem: Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name. I spent a few minutes and wrote up a script that gets this information and decided I'd post it here for others to enjoy.

    Background: The Get-SPShellAdmin commandlet returns a listing of SPShellAdmins for the given database Id you pass in, or the farm configuration database by default. For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases specifically). Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey Scripting Guy! blog regarding granting SPShellAdmin access.

    Solution: Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

        # declare a hashtable to store results
        $results = @{}

        # fetch databases (only configuration and content DBs are needed)
        $databasesToQuery = Get-SPDatabase | Where {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

        # for each database get spshelladmins and add db name and username to result
        $databasesToQuery | ForEach-Object {$dbName = $_.Name; Get-SPShellAdmin -database $_.id | ForEach-Object {$results.Add($dbName, $_.username)}}

        # sort results by db name and pipe to table with auto sizing of col width
        $results.GetEnumerator() | Sort-Object -Property Name | ft -AutoSize

    Conclusion: In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm. Funny enough, it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins). Feel free to use this script and modify as needed, just be sure to give credit back to the original author. Let me know if you have any questions or comments. Enjoy!

    -Frog Out

    Links:
    PowerShell Hashtables: http://technet.microsoft.com/en-us/library/ee692803.aspx
    SPShellAdmin Access Explained: http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article

  • Efficient use of Bundling

    - by ACShorten
    One of the discussions I am having with customers and consulting people is about Bundling and its appropriate use. We introduced Bundling post-release in the V2.2 code line to allow partners and consultants to build solutions using the Configuration Tools objects, such as UI Maps, Service Scripts, Business Objects, Business Services etc., and then export and migrate them as solutions. Whilst that was the original intent, I have found a few teams using the facility for other data and then complaining about the efficiency or relevance of the tool. Here are a number of guidelines to help optimize the use of Bundling for your implementation:

    - Not all objects can be bundled. Only specific objects in the product can be bundled. These are targeted at Configuration Tools objects and a select group of other objects that are required for those objects: Maintenance Objects with the option "Eligible for Bundling" set to Y (which also contain a Bundling Add BO).
    - Add objects to the Bundle as you complete them. Bundling can have issues with sequencing objects. The best way of combating this is to add objects to the bundle as you complete them. This will help make sure you sequence the loading of the objects, as you are building them, in the correct order. Remember that Bundling was designed for developers and partners to deliver solutions. If you leave adding objects to a Bundle to the Bundle Export zones, then you will have less control over the sequence in which they are applied, and this can cause timing issues.
    - Bundling takes the latest revision. If you combine Bundling with Revision Control, then Bundling will take the latest revision of the object at the time of the export operation.
    - Bundling and version control products. If you use a version control tool to control your Java code, then you can also check in the Bundle to associate a release between the code and a bundle.

    Bundling is quite a powerful feature of the Oracle Utilities Application Framework that allows sales, partners, consultants and customers to package and import their Configuration Tools based solutions.

    Read the article

  • Is there a way to legally create a game mod?

    - by Rodrigo Guedes
    Some questions about it: If I create a funny version of a copyrighted game and sell it (crediting the original developers), would it be considered a parody or would I need to pay royalties? If I create a game mod for my own personal use, would it be legal? What if I gave it for free to a friend? Is there a general rule about it, or does it depend on the developer's will? P.S.: I'm not talking about cloning games like this question. It's all about a game clearly based on another. Something like "GTA Gotham City" ;) EDIT: A picture I found on the internet illustrates what I'm talking about (image omitted in this excerpt). Just in case I was not clear: I have never created a mod. I was just wondering if it would be legally possible before trying to do it. I'm not condoning piracy. I pay dearly for my games (you guys have no idea how expensive games are in Brazil due to taxes). Once more I say that the question is not about cloning. Cloning is copying something and trying to make your version look like a brand-new product. Mods are intended to make reference to one or more of their sources. I'm not sure if it can be done legally (if I knew, I wouldn't be asking), but I'm sure this question is not a duplicate. Even so, I trust the moderators, and if they close my question I will not be offended - at least I had an opportunity to explain myself and got one good answer (by the time I write this; maybe more will be given later).

    Read the article

  • Mapping XFCE4/xRDP sessions to users

    - by garrilla
    I have Ubuntu 13.10 with Xubuntu Desktop - XFCE4. I'm trying to use xRDP to allow MS Windows users to log in to the machine with their own user. I've been around the houses a lot with this! I've found two half-way solutions, but can't get them to work as I'd like...

    1) In /etc/xrdp/xrdp.ini I set the port to -1:

        [xrdp1]
        name=sesman-Xvnc
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=-1

    Each time any user logs on they get a new session - they can never go back to their original session.

    2) In /etc/xrdp/xrdp.ini I set the port to 5912 (e.g.):

        [xrdp1]
        name=sesman-Xvnc
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5912

    Each time any user logs on they always log on to the same session, irrespective of their logon details.

    ??) I found a mid-way solution: create a lot of sessions by adding additional options in the xrdp.ini, e.g.

        [xrdp8]
        name=Bob's Logon
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5913

        [xrdp9]
        name=Jill's Logon
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5914

    and so on, but the problem with this is that Jill can still log into Bob's remote session??? Is it possible to do what I'm trying to do? Maybe I have to use different tools?

    Read the article
