Search Results

Search found 13293 results on 532 pages for 'small ticket'.

Page 280 of 532

  • My new anti-patent BSD-based license: necessary and effective? [closed]

    - by paperjam
    I am writing multimedia software in a domain that is rife with software patents. I want to open source my software but only for the benefit of those who don't play the patent game, that is enthusiasts, small companies, research projects, etc. The idea is, if my code would infringe a software patent somewhere and a company pays to license that patent, they then lose the right to use and distribute my software. Now I detest license proliferation as much as anyone but I can't find an existing OSI approved license that does this. The GPL comes close, but it only restricts distribution, not use. I want to stop someone using my software should they obtain a patent license to do so. Does another license do this job? Is the wording below unambiguous? - I don't want a legal opinion, just whether it would be interpreted as I intend. Copyright (c) <year>, <copyright holder> All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: [ three standard new-BSD conditions not shown here] * No patents are licensed from any third party in respect of redistribution or use of this software or its derivatives unless the patent license is arranged to permit free use and distribution by all. THIS SOFTWARE IS... [standard BSD disclaimer not shown here]

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard. I have then been mapping these flat file schemas to a "normal" XML schema format. I had not previously had any cause to map flat files and ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source it threw an XSL transform error: Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1. Due to the complexity of the map in question I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time I figured out what the problem was. Looking at the map properties I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "Native".

    Read the article

  • How to display/add play button on youtube image?

    - by Zakir Sajib
    I have embedded the YouTube thumbnail from the YouTube server, which is "http://img.youtube.com/vi/0.jpg", but no play button shows up in the middle of the image as we usually see on YouTube videos. I tried the following code to put an image on top of the thumbnail, but it shows a bigger picture (I know why): <a class="fancybox" href="#video"> <img src="/wp-content/themes/mytheme/images/play_button.png" no-repeat width: 0px height:0px; style="background: url(http://img.youtube.com/vi/<?php echo $youtubeid ; ?>/0.jpg) transparent" width="180" height="150"/> <div id="video"> // here is my usual embedded YouTube code </div> Both images are shown at 180 x 150, but that's not what I want. I want the YouTube image shown at 180 x 150 and the play button image (play_button.png) displayed small in the middle of it. Any clue in CSS or PHP coding would be a great favour.
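
    One way to get the overlay effect, sketched below as a rough example rather than a drop-in fix: keep the thumbnail as the background of the link and absolutely position the play button inside it. The video-thumb and play-overlay class names are made up for this sketch; the thumbnail URL, play_button.png path and $youtubeid come from the question.

      <style>
        .video-thumb  { position: relative; display: block; width: 180px; height: 150px;
                        background: url(http://img.youtube.com/vi/<?php echo $youtubeid; ?>/0.jpg)
                                    center center no-repeat;
                        background-size: cover; }
        .play-overlay { position: absolute; top: 50%; left: 50%; width: 48px; height: 48px;
                        margin: -24px 0 0 -24px; /* pulls a 48x48 icon back to the centre */ }
      </style>

      <a class="fancybox video-thumb" href="#video">
        <img class="play-overlay" src="/wp-content/themes/mytheme/images/play_button.png" alt="Play" />
      </a>

    The fancybox link target and the hidden #video div stay exactly as in the original markup.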

    Read the article

  • Understanding and memorizing git rebase parameters

    - by Robert Dailey
    So far the most confusing portion of git is rebasing onto another branch. Specifically, it's the command line arguments that are confusing. Each time I want to rebase a small piece of one branch onto the tip of another, I have to review the git rebase documentation and it takes me about 5-10 minutes to understand what each of the 3 main arguments should be. git rebase <upstream> <branch> --onto <newbase> What is a good rule of thumb to help me memorize what each of these 3 parameters should be set to, given any kind of rebase onto another branch? Bear in mind I have gone over the git-rebase documentation again, and again, and again, and again (and again), but it's always difficult to understand (like a boring scientific white-paper or something). So at this point I feel I need to involve other people to help me grasp it. My goal is that I should never have to review the documentation for these basic parameters. I haven't been able to memorize them so far, and I've done a ton of rebases already. So it's a bit unusual that I've been able to memorize every other command and its parameters so far, but not rebase with --onto.
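
    For what it's worth, here is one worked example under an assumed history (the commit and branch names are illustrative), since seeing the three roles side by side is often what makes them stick:

      # Assumed history:   A---B---C        (master)
      #                         \
      #                          D---E---F  (topic)
      #
      # <upstream> = cut-off point: commits reachable from it are NOT replayed
      # <branch>   = the branch whose commits you want to move
      # <newbase>  = where the replayed commits should land (defaults to <upstream>)

      git rebase --onto master B topic

      # Reads as: "take the commits of topic that are not in B (that is D, E, F)
      # and replay them onto master", giving:
      #
      #   A---B---C                 (master)
      #            \
      #             D'---E'---F'    (topic)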

    Read the article

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium-sized company that has around 250 developers. Unfortunately, lots of them are stuck in a procedural way of thinking and some teams constantly deliver big Transactional Script applications, when in fact the application contains rich logic. They also fail to manage design dependencies and end up with services that depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have poor architecture and design. Another issue is that there are some developers who are against writing any kind of test. A few things I'm doing to change this (but I'm either failing or the change is too small): running presentations about design principles (SOLID, clean code, etc.); workshops about TDD and BDD; coaching teams (this includes using Sonar, FindBugs, JDepend and other tools); IDE & refactoring talks. A few things I'm thinking of doing in the future (but I'm concerned they might not be good): forming a team of OO evangelists who disseminate an OO way of thinking in different teams (these people would need to change teams every few months); running design review sessions to criticise the design and suggest improvements (even if the improvements are not made because of time constraints, I think this might be useful). Something I found with the teams I coach is that as soon as I leave them, they revert to the old practices. I know I don't spend a lot of time with them, usually just one month. So whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing it was to hit my head against the wall until I passed out.

    Read the article

  • How often is seq used in Haskell production code?

    - by Giorgio
    I have some experience writing small tools in Haskell and I find it very intuitive to use, especially for writing filters (using interact) that process their standard input and pipe it to standard output. Recently I tried to use one such filter on a file that was about 10 times larger than usual and I got a Stack space overflow error. After doing some reading (e.g. here and here) I have identified two guidelines to save stack space (experienced Haskellers, please correct me if I write something that is not correct): Avoid recursive function calls that are not tail-recursive (this is valid for all functional languages that support tail-call optimization). Introduce seq to force early evaluation of sub-expressions so that expressions do not grow too large before they are reduced (this is specific to Haskell, or at least to languages using lazy evaluation). After introducing five or six seq calls in my code my tool runs smoothly again (also on the larger data). However, I find the original code a bit more readable. Since I am not an experienced Haskell programmer I wanted to ask whether introducing seq in this way is a common practice, and how often one will normally see seq in Haskell production code. Or are there any techniques that allow one to avoid using seq too often and still use little stack space?
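
    Not from the post itself, just a minimal sketch of the second guideline: a hand-rolled tail-recursive loop whose accumulators are forced with seq on every step, so that chains of unevaluated (+) thunks never build up on large input (the function and names are illustrative):

      -- Without the two seqs, s and n accumulate thunks like ((0 + x1) + x2) + ...
      -- and reducing that chain at the end is what blows the stack on big inputs.
      sumAndLength :: [Double] -> (Double, Int)
      sumAndLength = go 0 0
        where
          go s n []     = (s, n)
          go s n (x:xs) = let s' = s + x
                              n' = n + 1
                          in  s' `seq` n' `seq` go s' n' xs

      main :: IO ()
      main = print (sumAndLength [1 .. 1000000])

    Data.List.foldl' packages the same idea, and bang patterns (with the BangPatterns extension) are the other common spelling, which is one reason raw seq calls are less common in production code than you might expect.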

    Read the article

  • Price Drop for Processor based License on Exalytics

    - by Mike.Hallett(at)Oracle-BI&EPM
    • 33% reduction in the list "per processor" license pricing for the Oracle BI Foundation Suite. • New capacity-based licensing which allows customers to think big & start small, significantly lowering the entry price point for an Exalytics. Oracle BI Software List Price changes: In response to new powerful platforms like the in-memory Oracle Exalytics with 40 CPU cores (counted under Oracle pricing policy as 20 “processors”), the list price of “Oracle BI Foundation Suite” (BIFS) is reduced by 33%, from $450K per processor to $300K per processor. Capacity-based licensing on Exalytics (Trusted Partitions): “Capacity-based pricing” for the BIFS, Endeca, Essbase and TimesTen for Exalytics software is now available for Exalytics systems. This is delivered using “Oracle VM” (OVM). We still ship a full Exalytics machine to all customers, but they may choose to use and license only a subset of the processors installed in the machine. Customers can license Exalytics software in units of 5 “processors”: 5, 10, 15 or the full capacity of 20. As the customer’s implementation and workload increases, it is a simple matter to license additional processors and, using OVM, make them available to the BI or EPM application. Endeca Information Discovery now available on Exalytics: Oracle has also announced the certification of “Oracle Endeca Information Discovery” (EID) on the Exalytics machine. EID can be licensed alone or in combination with BIFS & TimesTen for an Exalytics stack, and it also participates in the capacity-based pricing outlined above. The Exalytics hardware is the perfect platform for EID, and provides superb power and performance for this in-memory hybrid of text search and analytics. For more information: Oracle Price Lists; Oracle Partitioning Policy; discussion by Mark Rittman (Rittman Mead Consulting Ltd.) on Oracle Trusted Partitions for Oracle Engineered Systems, Oracle Exalytics and Updated BI Foundation Pricing.

    Read the article

  • Need help eliminating dead code paths and variables from C source code

    - by Anjum Kaiser
    I have a legacy C codebase on my hands, and I have been given the task of filtering dead/unused symbols and paths out of it. Over time there were many insertions and deletions, leaving lots of unused symbols. I have identified many dead variables which were only being written to once or twice, but were never being read from. Blackbox, whitebox and regression testing all proved that the dead code removal did not affect any procedures. (We have a comprehensive test suite.) But this removal was done only on a small part of the code. Now I am looking for some way to automate this work. We rely on GCC to do the work. P.S. I'm interested in removing stuff like: variables which are being read just for the sake of reading from them; variables which are spread across multiple source files and only being written to. For example: file1.c: int i; file2.c: extern int i; .... i=x;
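
    Since GCC is already the toolchain here, one hedged starting point (the flag names are standard GCC/binutils options, but -Wunused-but-set-variable needs GCC 4.6 or later) is to let the compiler flag write-only locals and let the linker report sections that nothing references:

      # Warn about locals that are assigned but never read, and about unused variables
      gcc -c -O2 -Wall -Wextra -Wunused-variable -Wunused-but-set-variable file1.c file2.c

      # Put every function/global in its own section, then let the linker drop
      # (and list) the ones that are never referenced at all
      gcc -O2 -ffunction-sections -fdata-sections file1.c file2.c \
          -Wl,--gc-sections -Wl,--print-gc-sections -o app

    A global that is only ever written (like the extern int i example) still counts as referenced, so cross-file write-only variables need a static-analysis tool or manual inspection on top of this.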

    Read the article

  • Co-worker uses ridiculous commenting convention, how to cope? [closed]

    - by Jessica Friedman
    A co-worker in the small start-up I work at writes (C++) code like this: // some class class SomeClass { // c'tor SomeClass(); // d'tor ~SomeClass(); // some function void someFunction(int x, int y); }; // some function void SomeClass::someFunction(int x, int y) { // init worker m_worker.init(); // log LOG_DEBUG("Worker initialized"); // find current cache auto it = m_currentCache.find(); // flush if (it->flush() == false) { // return return false } // return return true } This is how he writes 100% of his code: a spacer line, a useless comment which says nothing other than what is plainly stated in the following statement, and the statement itself. This is absolutely driving me insane. A simple class written by him spans three times as much as it is supposed to. It looks well commented, but the comments contain no new information. In fact the code is completely undocumented by any normal definition of "documentation". All of the comments are just a repetition of what is written in C++ on the following line. I've confronted him several times about it, and each time he seems to understand what I am saying but then goes on not changing his coding style and not fixing old code written like this. I've gone on and on, again and again, about the distinct disadvantages of writing code like this, but nothing gets through to him. Other co-workers don't seem to mind it as much, and management doesn't seem to really care. What do I do? (Sorry for the rant.)

    Read the article

  • XNA 4.0, Combining model draw calls

    - by MayContainNuts
    I have the following problem: the levels in my game are made up of a large quantity of small models, and because of that I am experiencing frame-rate problems. I have already done some research and come to the conclusion that the number of draw calls I am making must be the root of my problems. I've looked around for a while now and couldn't quite find a satisfying solution. I can't cull any of those models; in a worst-case scenario there could be 1000 of them visible at the same time. I also looked at hardware geometry instancing, but I don't think that's quite what I'm looking for, because the level consists of a lot of different parts. So, what I'd like to do is combine 100 or 200 of these models into a single large one and draw it as a whole 'chunk'. The whole geometry is static so it wouldn't have to be changed after combining, but different parts of it would have to use different textures (I think I can accomplish that with a texture atlas). But I have no idea how to do that, so does anybody have any suggestions?
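
    Not the only approach, but a sketch of the usual "bake static geometry into one buffer pair per chunk" idea. It assumes you already have the raw vertices, indices and world matrix for every part (the PartInstance type here is hypothetical) and a texture atlas so the whole chunk can share one texture; XNA 4.0 types from Microsoft.Xna.Framework and Microsoft.Xna.Framework.Graphics are used throughout.

      // at level-build / LoadContent time
      var vertices = new List<VertexPositionNormalTexture>();
      var indices  = new List<ushort>();                     // switch to 32-bit indices past 65535 verts

      foreach (PartInstance part in chunkParts)              // hypothetical: raw data + world matrix
      {
          int baseVertex = vertices.Count;
          foreach (VertexPositionNormalTexture v in part.Vertices)
              vertices.Add(new VertexPositionNormalTexture(
                  Vector3.Transform(v.Position, part.World),        // bake the world transform in
                  Vector3.TransformNormal(v.Normal, part.World),
                  v.TextureCoordinate));                            // remap UVs here for the atlas
          foreach (ushort i in part.Indices)
              indices.Add((ushort)(baseVertex + i));
      }

      var vb = new VertexBuffer(GraphicsDevice, typeof(VertexPositionNormalTexture),
                                vertices.Count, BufferUsage.WriteOnly);
      vb.SetData(vertices.ToArray());
      var ib = new IndexBuffer(GraphicsDevice, IndexElementSize.SixteenBits,
                               indices.Count, BufferUsage.WriteOnly);
      ib.SetData(indices.ToArray());

      // at draw time: one call for the whole chunk instead of one per model
      // (apply your effect pass as usual before the draw)
      GraphicsDevice.SetVertexBuffer(vb);
      GraphicsDevice.Indices = ib;
      GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
          0, 0, vertices.Count, 0, indices.Count / 3);

    Splitting the level into chunks of 100-200 parts keeps some frustum culling while cutting the draw-call count by roughly two orders of magnitude.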

    Read the article

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 3)

    - by hinkmond
    In Part 2, I described what connections you need to make for this demo using a Hexbug Spider Robot, a Raspberry Pi, and Java SE Embedded for programming. Here are some photos of me doing the soldering. Software engineers should not be afraid of a little soldering work. It's all good. See: Skynet Big Data Demo (Part 2) One thing to watch out for when you open the remote is that there may be some glue covering the contact points. Make sure to use an X-Acto knife or small screwdriver to scrape away any glue or non-conductive material covering each place where you need to solder. And after you are done with your soldering and you have given the solder enough time to cool, make sure all your connections are marked so that you know which wire goes where. Give each wire a very light tug to make sure it is soldered correctly and is making good contact. There are lots of videos on the Web to help you if this is your first time soldering. Check out Lady Ada's (from adafruit.com) links on how to solder if you need some additional help: http://www.ladyada.net/learn/soldering/thm.html If everything looks good, zip everything back up and meet back here for how to connect these wires to your Raspberry Pi. That will be it for the hardware part of this project. See, that wasn't so bad. Hinkmond

    Read the article

  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid. It's just vanilla math. And you can kind of ignore the geometry of the sphere's surface for a bunch of it if you want to just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters. Geo-coded ARGs and global roguelikes and stuff. I want square(ish?) locations -- reasonably representable by square tiles of the same size across the globe, anyway. This has to be a solved problem, right? What are the solutions? ETA: At the equator -- and assuming that your square locations are reasonably small -- it's close enough to true that you can get away with having one square in the rows north and south of the most equatorial row. And you could probably get away with hand-waving the difference up to 45 degrees or so. But eventually you need to have fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2 then they're just like hexes and it's relatively easy to do the coding to keep track of the connections. But as you get pole-ward, it gets more and more extreme. Projecting the surface of the world onto the surface of a cube is tempting, but I figured there must be more elegant solutions already in use. If I did the cube thing (not dissecting it further through geodesy), are there any pros and cons related to placing the pole at the center of a face or at the vertex of three sides?
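
    For the cube route, here is a rough sketch of the usual mapping (sometimes described as an "inflated cube"): each face carries an ordinary square grid, and a tile centre (u, v) in [-1, 1] on a face is pushed out to the unit sphere by normalising the cube-surface point. The face numbering and orientation below are just one convention, and the method name is illustrative.

      // face 0..5 = +X, -X, +Y, -Y, +Z, -Z;  u, v in [-1, 1] across the face
      // (lives in any static class; needs using System for Math)
      static double[] FaceToSphere(int face, double u, double v)
      {
          double x, y, z;
          switch (face)
          {
              case 0: x =  1; y =  u; z =  v; break;   // +X
              case 1: x = -1; y = -u; z =  v; break;   // -X
              case 2: x =  u; y =  1; z = -v; break;   // +Y
              case 3: x =  u; y = -1; z =  v; break;   // -Y
              case 4: x =  u; y =  v; z =  1; break;   // +Z
              default: x = -u; y =  v; z = -1; break;  // -Z
          }
          double len = Math.Sqrt(x * x + y * y + z * z);   // project onto the unit sphere
          return new[] { x / len, y / len, z / len };
      }

    Tiles near a face centre come out larger than tiles near the face corners; spacing the grid lines by tan(angle) rather than linearly (the "equal-angle" cube variant) reduces, but does not remove, that distortion. A pole placed at the centre of a face sits in the least-distorted part of its grid, whereas a pole on a vertex sits exactly where three faces and the worst corner distortion meet.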

    Read the article

  • Most effective way to do daily standup meeting when a few people are remote

    - by Burhan Ali
    I am a software developer in a small team of seven. We are not an Agile (with a big 'A') team but are experimenting with some aspects of agile. One of these is the daily "standup" meeting. The difficulty here is that for two days of the week we have at least one person working from home, so the full team isn't available in the same room. What is the best way to carry out a daily standup in this situation? Some facts that may be relevant: We all work in a single open-plan room. We use Skype in our company. We don't have any video conferencing capability. We all work the same hours so there are no timezone complexities involved. The development manager is one of the people who works from home one day a week. Things we have tried: Conference call using Skype: this is tricky for those in the office because you can hear people speak in the room and then a split second later through the headset. This can be very distracting. Conference phone: awful experience. Hard to get them to work, and poor-quality audio. Text-based updates using Skype: this is not as engaging and is no different from just firing off a status email in the morning. I have seen other questions about remote collaboration, but they are mainly about completely remote teams and/or teams that span multiple time zones. We are not affected by either of these problems. What can we do to make our standup meetings better in these circumstances?

    Read the article

  • What data should be cached in a multiplayer server, relative to AI and players?

    - by DevilWithin
    In a virtual place, fully network-driven, with an arbitrary number of players and an arbitrary number of enemies, what data should be cached in the server memory in order to keep the AI simulation smooth? To explain: let's say player A sees players B to E and enemies A to G. Each of those players sees player A, but not necessarily each other. The same applies to enemies. Please think of this question from a top-down perspective. In many cases, for example when a player shoots his gun, the server handles the sound as a radial "signal" that every other entity within reach "hears" and reacts upon. Doing these searches all the time for a whole area, containing possibly a lot of unrelated players and enemies, seems to be an issue when the budget for each AI agent is so small. Should every entity cache whatever enters and exits its radius of awareness? Is there a good way to track the nearby entities without flooding the memory with such caches? And what about other AI-related problems that may arise, assuming the previous one is solved? We're talking about environments with possibly hundreds of enemies, a swarm.
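
    One common middle ground, sketched below (the class name and cell size are illustrative and nothing here is tied to a particular engine): keep a coarse uniform grid on the server and answer "who is within radius r of this point?" by visiting only the overlapping cells, instead of every entity caching everything around it.

      using System;
      using System.Collections.Generic;

      // Minimal uniform-grid spatial hash: a gunshot query touches a handful of cells,
      // not the whole world, and needs no per-entity awareness cache at all.
      class SpatialGrid
      {
          private readonly float cellSize;
          private readonly Dictionary<(int, int), List<int>> cells = new Dictionary<(int, int), List<int>>();
          private readonly Dictionary<int, (float X, float Y)> positions = new Dictionary<int, (float X, float Y)>();

          public SpatialGrid(float cellSize) { this.cellSize = cellSize; }

          private (int, int) CellOf(float x, float y) =>
              ((int)Math.Floor(x / cellSize), (int)Math.Floor(y / cellSize));

          public void Upsert(int id, float x, float y)          // call whenever an entity moves
          {
              Remove(id);
              positions[id] = (x, y);
              var key = CellOf(x, y);
              if (!cells.TryGetValue(key, out var bucket)) cells[key] = bucket = new List<int>();
              bucket.Add(id);
          }

          public void Remove(int id)
          {
              if (!positions.TryGetValue(id, out var p)) return;
              cells[CellOf(p.X, p.Y)].Remove(id);
              positions.Remove(id);
          }

          public IEnumerable<int> QueryRadius(float x, float y, float r)
          {
              var (minX, minY) = CellOf(x - r, y - r);
              var (maxX, maxY) = CellOf(x + r, y + r);
              for (int cx = minX; cx <= maxX; cx++)
                  for (int cy = minY; cy <= maxY; cy++)
                      if (cells.TryGetValue((cx, cy), out var bucket))
                          foreach (int id in bucket)
                          {
                              var p = positions[id];
                              float dx = p.X - x, dy = p.Y - y;
                              if (dx * dx + dy * dy <= r * r) yield return id;
                          }
          }
      }

    Per-agent awareness lists can then be rebuilt lazily, only when an entity changes cell or on a slow timer, rather than being pushed to every entity on every event.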

    Read the article

  • Secure Coding Practices in .NET

    - by SoftwareSecurity
    Thanks to everyone who helped pack the room at the Fox Valley Day of .NET. This presentation was designed to help developers understand why secure coding is important, what areas to focus on, and where to find additional resources. You can find the slides here. Remember to understand what you are really trying to protect within your application. This needs to be a conversation between the application owner, developer and architect. Understand what data (or asset) needs to be protected. This could be passwords, credit cards, or Social Security Numbers. It may also be business-specific information like confidential business data, etc. Performing a Risk and Privacy Assessment & Threat Model on your applications, even in a small way, can help you organize this process. These are the areas to pay attention to when coding: Authentication & Authorization, Logging & Auditing, Event Handling, Session and State Management, Encryption. Links requested: Slides. Books: The Security Development Lifecycle: SDL: A Process for Developing Demonstrably More Secure Software; Threat Modeling; Writing Secure Code; The Web Application Hacker's Handbook; Secure Programming with Static Analysis. Other Resources: OWASP, OWASP Top 10, OWASP WebScarab, OWASP WebGoat, Internet Storm Center, Web Application Security Consortium. Events: OWASP AppSec 2011 in Minneapolis

    Read the article

  • Best peer-to-peer game architecture

    - by Dejw
    Consider a setup where game clients: have quite small computing resources (mobile devices, smartphones); are all connected to a common router (LAN, hotspot, etc.). The users want to play a multiplayer game, without an external server. One solution is to host an authoritative server on one phone, which in this case would also be a client. Considering point 1, this solution is not acceptable, since the phone's computing resources are not sufficient. So, I want to design a peer-to-peer architecture that will distribute the game's simulation load among the clients. Because of point 2 the system needn't be complex with regard to optimization; the latency will be very low. Each client can be an authoritative source of data about himself and his immediate environment (for example, bullets). What would be the best approach to designing such an architecture? Are there any known examples of such a LAN-level peer-to-peer protocol? Notes: Some of the problems are addressed here, but the concepts listed there are too high-level for me. Security: I know that not having one authoritative server is a security issue, but it is not relevant in this case as I'm willing to trust the clients. Edit: I forgot to mention that it will be a rather fast-paced game (a shooter). Also, I have already read about networking architectures at Gaffer on Games.

    Read the article

  • Is it a bad idea to release software on the night before Christmas?

    - by Conor
    We're about to go live with a new version of our system, and it's getting really close to Christmas. I work in a very small company, and everyone will be on leave or only sporadically available over the next week or so. I've argued with my boss that this is very risky and that we should go live in the new year, when everyone is back and we can provide full support. He is unflinching - he argues that we need to go live sooner so that we can get new users and more revenue, which we need. The number of new users over the next week or so will be minor. A decent amount of system testing has been performed. However, a new live system, in my experience, needs a lot of care and attention in the first few days. Am I being pessimistic or realistic? Update - January: The system did not go live over Christmas. Ongoing system testing revealed various problems, so there were no support issues to deal with. Still preparing for release...

    Read the article

  • Social network, online community, company and job reviews, salary statistics and much more... Do we have it? Do we need it?

    - by Vlad Lazarenko
    I have many friends from Ukraine who are programmers. So I found out that they have a web site that collects, organizes and analyzes information about IT companies, including location, feedback, company reviews from current and former employees, etc. They also collect programming salaries and organize them by language, region, etc. That web site is run by programmers and for programmers; all information is absolutely public and free. Plus, the web site has forums, and people can discuss things (more social than strictly programming-specific), publish articles, news, etc. I personally think that is useful, especially for those who are new to this industry. For example, you might do a little research and find out that, say, Java programmers get paid more than PHP programmers but demand is lower. Or you get an offer from a company, are about to accept it, but read the reviews and find out that they don't even provide internet access at work, and if you need to download something you have to ask your manager to do it for you, and the managers share a single computer with an internet connection to get that stuff for you (there is only one such company in Kiev, Ukraine, called SMK, for Software Mac Kiev, a big shame). So the question is - do we have something like it in the US? Or at least, say, for the New York region? Or state? All information I managed to find online is inaccurate or incomplete. Forums are very specific. If we don't have it, would you be interested in creating such a portal? Thanks!

    Read the article

  • Who is a CMS really for?

    - by Eirc man
    I have lately started discovering Content Management Systems, and I was wondering: who is a CMS really for? What I mean by that: is it only for companies, small businesses or individuals that pay a contractor to make a website whose users can just upload content through an easy interface? Or is it also used by programmers to build their own websites and projects? Would Facebook, Twitter or StackExchange ever have been started using a CMS, even a very powerful one? Would you as a programmer build your own "fancy" website on top of a CMS, for example Typo3, or would you build it from scratch? P.S. To be clearer, here is a summary: what I mean to begin with is, would I as a developer choose a CMS to develop a website that can be scaled to a big base of users, or would I be stuck if I chose to start with a CMS? What if I build a website using a CMS, the website explodes in popularity, and then I want to add much more functionality than I had planned - is it possible that the CMS will limit the growth, because it might not have been built for that kind of scale?

    Read the article

  • Tell Us Once – Final Phase goes live

    - by BizTalk Visionary
    Yesterday the final phase of 'Tell Us Once' went live. This completes the 4 1/2 year journey Solidsoft started on this cross-government project, with the addition of full electronic distribution of data and the most important piece – access for the citizen to use the service on-line. Tell Us Once (TUO) is the award-winning, cross-government programme that lets people inform central government and local authorities just once of a birth or death. In service in over 95% of councils in England, Scotland and Wales, it provides a permanent solution to the long-standing and frustrating issue of people having to notify the government multiple times. Several years ago, research showed that people had to make up to 44 contacts when reporting a death to government bodies and their local authority. The TUO service is offered as a face-to-face interview by the local authority or by telephone to a dedicated telephony service run by the Department for Work and Pensions (DWP), and now as a TUO online service for deaths. From the bereavement section of the Direct Gov web site the citizen is able to 'enrich' the standard death registration data to allow the 'Tell Us Once' system to inform the various government departments about the death. These include the local council, DVLA, DWP, the Passport Service and HMRC. For the record, this is an excellent example of how an SME working with a large SI partner can deliver success for government in a responsive and agile manner. For me personally it is a proud moment in which a vision I started with a very small team was followed through, extended and finally delivered by an excellent team at Solidsoft.

    Read the article

  • What is the target of Unity?

    - by burli
    Unity was first developed for netbooks, but the netbook market is shrinking. Unity is not specialized for tablet PCs the way Android 3 is, but it may work well with some specialized apps for those devices. Unity is still nice for notebooks with small displays, but there is no big advantage on the desktop compared with other desktop environments like Gnome 2/3 or KDE. So what's the point? My first suggestion was a hybrid between a tablet PC and a desktop, for example for a manager. He can plug the tablet into a docking station in his office and work at a normal desktop, which is not possible with iOS or Android. If he is in a meeting he can use it as a tablet to make notes, for example, or if he is somewhere else outside the office or the company. The same goes for normal users: they can dock the tablet and use it like a normal desktop PC, or they can lie on the couch and browse the web, read a book or chat with friends. So, that's my suggestion. But what is the real plan for Unity or Ubuntu in general? I'm curious ;)

    Read the article

  • Take care to unhook Anonymous Delegates

    - by David Vallens
    Anonymous delegates are great: they eliminate the need for lots of small classes that just pass values around. However, care needs to be taken when using them, as they are not automatically unhooked when the function you created them in returns. In fact, after it returns there is no way to unhook them. Consider the following code.   using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Diagnostics; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { SimpleEventSource t = new SimpleEventSource(); t.FireEvent(); FunctionWithAnonymousDelegate(t); t.FireEvent(); } private static void FunctionWithAnonymousDelegate(SimpleEventSource t) { t.MyEvent += delegate(object sender, EventArgs args) { Debug.WriteLine("Anonymous delegate called"); }; t.FireEvent(); } } public class SimpleEventSource { public event EventHandler MyEvent; public void FireEvent() { if (MyEvent == null) { Debug.WriteLine("Attempting to fire event - but no ones listening"); } else { Debug.WriteLine("Firing event"); MyEvent(this, EventArgs.Empty); } } } } If you expected the anonymous delegate to die with the function that created it, you would expect the output "Attempting to fire event - but no ones listening", "Firing event", "Anonymous delegate called", "Attempting to fire event - but no ones listening". However, what you actually get is "Attempting to fire event - but no ones listening", "Firing event", "Anonymous delegate called", "Firing event", "Anonymous delegate called". In my example the issue just slows things down, but if your delegate modifies objects, then you could end up with difficult-to-diagnose bugs. A solution to this problem is to unhook the delegate within the function: EventHandler myDelegate = delegate { Console.WriteLine("I did it!"); }; MyEvent += myDelegate; // .... later MyEvent -= myDelegate;
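
    Applied to the listing above, a sketch of the fix inside FunctionWithAnonymousDelegate: keep the handler in a local variable so the same delegate instance can be unsubscribed before the method returns.

      private static void FunctionWithAnonymousDelegate(SimpleEventSource t)
      {
          EventHandler handler = delegate(object sender, EventArgs args)
          {
              Debug.WriteLine("Anonymous delegate called");
          };

          t.MyEvent += handler;   // hook
          t.FireEvent();
          t.MyEvent -= handler;   // unhook before returning; the same reference must be used
      }

    With that change, the final FireEvent() in Main once again reports that no one is listening, which is the output the first listing expected.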

    Read the article

  • Leveraging Code Across Platforms in Ever Bigger Games

    - by ashes999
    Summary: The same way that I continually build complex engines and libraries within a single platform and technology to allow me to build increasingly bigger and better games, how to continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experiences? Games are hard to build. Big games are even harder to build. I've decided that to be able to make big games, I need to start building smaller games, and building up an asset base of code, assets (graphics, sounds), tools, and most importantly, game engines, so that I can eventually get there. One game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example: A simple 2D game A tile-based game A game with RPG elements (items, equipment, monsters, battle) A full-fledged RPG A 3D RPG The problem now is if I have to change platforms or tools, I don't know how to leverage past code-bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I have ten years of game-development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?

    Read the article

  • Unable to install Ubuntu on an AMD 64-bit system with an AMD Radeon HD 6670 graphics card

    - by Tom Wingrove
    I’ve been running a dual-boot system (Ubuntu/Windows 7) for two years or so with no problems. I recently built an AMD 64-bit system and re-installed Windows, but when I went to load Ubuntu inside Windows I hit a snag. The screen view during installation became small square blocks of colour, which is obviously a graphics driver problem. I tried various live disks, both 32- and 64-bit, for Ubuntu 12.04, 11.10 and 10.10; all but Ubuntu 10.10 had the same problem. Ubuntu 10.10 loaded OK and installed the presented ATI graphics driver as usual, but I was left with the AMD Unsupported watermark at the bottom right of the screen. The graphics card installed in the computer is an MSI ATI Radeon HD 6670 (in effect an AMD Radeon HD 6670). I am fairly new to Linux and while I can install and tweak the OS, I am rather baffled as to what to do. So my question is: will an up-to-date ATI driver be released in the near future for installation/live disks? Or am I going to have to downgrade my graphics card to use Linux? Yours, Tom Wingrove

    Read the article

  • SQL SERVER – FIX ERROR – Cannot connect to . Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    - by pinaldave
    Just a day ago, I was attempting to connect to my local SQL Server using the IP 127.0.0.1. The IP is that of my local machine, and SQL Server is installed on the local box as well. However, whenever I tried to connect to the server it gave me the following strange error. Cannot connect to 127.0.0.1. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452) The reason was indeed strange, as I was trying to connect from the local box to the local box and it said my login was from an untrusted domain. As my system is not part of any domain, this was really confusing to me. Another thing was that I had always been able to connect to SQL Server using 127.0.0.1, so this was a bit strange to me. I started to think about what I had changed since the last time I connected to SQL Server. Suddenly I remembered that I had modified my computer's hosts file for some other purpose. Solution: I opened my hosts file and immediately added an entry like 127.0.0.1 localhost. Once I added it I was able to reconnect to SQL Server as usual. The location of the hosts file is C:\Windows\System32\drivers\etc. You will find a file with the name hosts in it; make sure to open it with Notepad. If you are part of a domain and your organization is using Active Directory, make sure that your account is added properly to Active Directory and has the proper security permissions to execute the task. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
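
    For reference, the entry in question is a single line; a minimal excerpt of the file (not its complete contents) looks like this:

      # C:\Windows\System32\drivers\etc\hosts
      127.0.0.1       localhost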

    Read the article
