Search Results

Search found 40226 results on 1610 pages for 'object relational model'.


  • How to render 3D models as SVG vector graphics? (planar projection)

    - by Jan
    This image (original SVG from Wikipedia, public domain) was created using the following procedure: create a 3D model in Google SketchUp, export as PDF, import in Inkscape, save as SVG. Is there a straightforward way to produce such an SVG with software that runs (natively) on Ubuntu? (Pantograph, a Blender plugin, has only broken download links; VRM, another Blender plugin, works with Blender 2.4x but not with Blender 2.6x.)

    Read the article

  • Defining a service layer: the text-based adventure

    - by Stacy Vicknair
    Applications these days have more options than ever for a user interface, and it's only going to grow. A successful product might require native applications for mobile devices, a regular web implementation, or even a gaming console. These systems often will be centralized and data driven. The solution is one that's fairly solitary: a service layer! Simply put, take what's shared and put it behind a physical or abstract layer that defines the boundary between the specific user interface and the shared content. I know, I know, none of this is complicated. But sometimes it can be difficult to discern what belongs on which side of the line. For instance, say we're creating a service that will provide content for both an ASP.NET MVC application and a WP7 application. Although the content served to each application is the same, there are different paradigms and patterns for displaying that data in the different environments. In ASP.NET MVC, you may create a model specific to a page that combines necessary information. In the WP7 application you might require different sets of data that you will connect via MVVM with the view. The general rule of thumb is that any shared content, business rules, or data should exist separately. Any element that is specific to the current UI implementation should be included in a separate library or with the UI implementation itself. The WP7 application doesn't need my MVC-specific model classes. My MVC application doesn't require those INotifyPropertyChanged viewmodels that the WP7 application depends on. In both cases, there should be additional processing done above the service layer to massage the data to the application's specific needs. Service-ocalypse: the text-based adventure What helps me the most about deciding whether something belongs coupled to the UI implementation or in the shared implementation is thinking of the simplest implementation you could have: a console application. You might have played a game like Peasant's Quest: the console app is the text-based adventure game version of your application. If your service were consumed in its simplest form, you would simply have a console-based API for it that issues requests. Maybe those requests aren't SWIM TO BOAT, but they might be CREATE USER JOHN. If I issue a request, I expect that request to be issued to the service. If the service has any exceptions or issues with my input, that business logic should be encapsulated in that service, not implemented in the UI. The service layer should be your functional application in its entirety, and anything above that layer should only assist with the display of that information.
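
    A minimal sketch of that separation, with hypothetical names (IUserService, UserDto and UserPageModel are mine, not the author's): the shared service owns the data and business rules, and each UI reshapes the DTOs for its own patterns.

        // Shared service layer: content, business rules and validation live here.
        public class UserDto { public int Id; public string Name; }

        public interface IUserService
        {
            UserDto CreateUser(string name);   // "CREATE USER JOHN" in the console version
            UserDto GetUser(int id);
        }

        // ASP.NET MVC side: shape the shared DTO into a page-specific model.
        public class UserPageModel
        {
            public string DisplayName;
            public static UserPageModel From(UserDto dto) =>
                new UserPageModel { DisplayName = dto.Name.ToUpperInvariant() };
        }

        // WP7 side: wrap the same DTO in an INotifyPropertyChanged viewmodel instead.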

    Read the article

  • Keypress Left is called twice in Update when key is pressed only once

    - by Simran kaur
    I have a piece of code that changes the position of the player when the left key is pressed. It is inside the Update() function. I know Update is called multiple times, but since I have an if statement to check whether the left arrow is pressed, it should update only once. I have verified with a print statement that once the key is pressed, it gets called twice. Problem: the position is updated twice when the key is pressed only once. Below is the structure of my code: void Update() { if (Input.GetKeyDown (KeyCode.LeftArrow)) { print ("PRESSEEEEEEEEEEEEEEEEEEDDDDDDDDDDDDDD"); } } I looked it up on the web and what was suggested is this: if (Event.current.type == EventType.KeyDown && Event.current.keyCode == KeyCode.LeftArrow) { print("pressed"); } But it gives me an error that says: "Object reference not set to an instance of an object". How can I fix this?
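
    For what it's worth, a hedged sketch of the usual explanation (my reading, not a confirmed diagnosis): Input.GetKeyDown is true for only one frame per press, so a double log usually means the script is running on more than one object, and Event.current is only valid inside OnGUI, which is why using it in Update throws the null-reference error. The class name below is hypothetical:

        using UnityEngine;

        public class PlayerMovement : MonoBehaviour
        {
            void Update()
            {
                // GetKeyDown is true for exactly one frame per press, per script instance.
                // If this logs twice, check whether the script is attached to two objects
                // (or twice to the same object) -- gameObject.name will show the culprit.
                if (Input.GetKeyDown(KeyCode.LeftArrow))
                    Debug.Log("Left pressed on " + gameObject.name);
            }

            void OnGUI()
            {
                // Event.current only exists inside OnGUI; reading it from Update() is what
                // raises "Object reference not set to an instance of an object".
                Event e = Event.current;
                if (e != null && e.type == EventType.KeyDown && e.keyCode == KeyCode.LeftArrow)
                    Debug.Log("Left pressed (OnGUI)");
            }
        }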

    Read the article

  • How to choose how to store data?

    - by Eldros
    "Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime." - Chinese proverb. I could ask what kind of data storage I should use for my actual project, but I want to learn to fish, so I don't need to ask for a fish each time I begin a new project. So far I have used two methods to store data in my non-game projects: XML files and relational databases. I know that there are also other kinds of databases, of the NoSQL kind. However, I wouldn't know whether there are more choices available to me, or how to choose in the first place, aside from arbitrarily picking one. So the question is the following: how should I choose the kind of data storage for a game project? I would be interested in the following criteria when choosing: the size of the project; the platform targeted by the game; the complexity of the data structure; (added) portability of data among many projects; (added) how often the data should be accessed; (added) multiple types of data for the same application; and any other point you think is of interest when deciding what to use. EDIT: I know about "Would it be better to use XML/JSON/Text or a database to store game content?", but thought it didn't address exactly my point. If I am wrong, I would gladly be shown the error in my ways.

    Read the article

  • Oracle Virtualization Friday Spotlight - November 8, 2013

    - by Monica Kumar
    Hands-on Private Cloud Simulator In One Hour Submitted by: Doan Nguyen, Senior Principal Product Marketing Director My aeronautics instructor used to say, "you can't appreciate flying until you take flight." To clarify, this is not about gearing up in a flying squirrel suit and hopping off a cliff (topic for another blog!) but rather about flying an airplane. The idea is to get hands-on with the controls in the cockpit and experience flight before you actually fly a real plane. After the initial 40 hours of flight time, the concept sank in and it really made sense. This concept is what inspired our technical experts to put together the hands-on lab for a private cloud deployment and management self-service model. Yes, we are comparing the lab to a flight simulator! Let's look at the parallels: to get trained to fly, starting in the simulator gets you off the ground quicker, with no need to have a real plane to begin with; in a hands-on lab, there is no need for a real server with networking and real storage installed - all you need is your laptop. The simulator is pre-configured, pre-flight check done; similarly, in the hands-on lab, Oracle VM and Oracle Enterprise Manager are pre-configured and assembled using Oracle VM VirtualBox as the container, so software installations are not needed. After time spent training at the controls, you can really appreciate the practical experience of flying. Along the same lines, the hands-on lab is a guided learning path, without the encumbrances of hardware and software installation, so you can learn about cloud deployment and management. However, unlike the simulator training, your time investment with the lab is only about an hour, not 40 hours! This hands-on lab takes you through private cloud deployment and management using Oracle VM and Oracle Enterprise Manager Cloud Control 12c in an Infrastructure as a Service (IaaS) model. You will first configure the IaaS cloud as the cloud administrator and then deploy guest virtual machines (VMs) as a self-service user. Then you are ready to take flight into the cloud! Why not step into the cockpit now!

    Read the article

  • Is it possible to natively boot Ubuntu Touch on a PC, especially on a Surface Pro?

    - by Jules P.
    I know the question has already been asked, but the link for the instructions in the answer is outdated and the proof-of-concept video doesn't match what I'm looking for. I've got a Surface Pro (1st model) and I've already run Ubuntu and Android on it several times with UNetbootin, so the question is: is there a way to natively run Ubuntu Touch (by natively I mean by directly booting it, even from a USB key, but no emulation, no virtualization) on such a device?

    Read the article

  • design practice for business layer when supporting API versioning

    - by user1186065
    Is there any design pattern or practice recommended for the business layer when dealing with multiple API versions? For example, I have something like this: http://site.com/blogs/v1/?count=10, which calls the business object method GetAllBlogs(int count) to get information, and http://site.com/blogs/v2/?blog_count=20, which calls the business object method GetAllBlogs_v2(int blogCounts). Since the parameter name changed, I created another business method for version 2. This is just one example, but there could be other breaking changes that require me to create yet another method to support both versions. Is there any design pattern or best practice for the business/data access layer that I should follow when supporting API versioning?
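
    One common approach (my suggestion, not from the original post) is to keep a single version-agnostic business method and translate each API version's contract into it through thin per-version adapters, so breaking changes stay at the API boundary. A sketch with hypothetical names:

        using System.Collections.Generic;

        // Single business method, independent of any API version.
        public class BlogService
        {
            public IList<string> GetAllBlogs(int count)
            {
                // shared query / business rules live here only
                return new List<string>();
            }
        }

        // Thin per-version adapters translate each version's contract into the business call.
        public class BlogsV1Controller
        {
            private readonly BlogService _service = new BlogService();
            public IList<string> Get(int count) => _service.GetAllBlogs(count);
        }

        public class BlogsV2Controller
        {
            private readonly BlogService _service = new BlogService();
            // v2 renamed the parameter; only the adapter changes, not the business layer.
            public IList<string> Get(int blog_count) => _service.GetAllBlogs(blog_count);
        }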

    Read the article

  • How to get the Dash and HUD to appear (and stop Unity spewing error messages)

    - by Ubuntiac
    I just installed Ubuntu 12.04 on my wife's Dell Inspiron 1501, which uses an R300 ATI graphics chip. Neither the Dash nor the HUD appears when pressing the appropriate key. When I run unity --reset & in the terminal, I see that over and over it's spitting out: "r300: CS space validation failed. (not enough memory?) Skipping rendering." This is just after starting Ubuntu with no apps open, so I find it hard to believe that just rendering the Dash / HUD completely blows out the VRAM. Any suggestions on getting this working? /usr/lib/nux/unity_support_test -p shows: OpenGL vendor string: X.Org R300 Project; OpenGL renderer string: Gallium 0.4 on ATI RS480; OpenGL version string: 2.1 Mesa 8.0.2; Not software rendered: yes; Not blacklisted: yes; GLX fbconfig: yes; GLX texture from pixmap: yes; GL npot or rect textures: yes; GL vertex program: yes; GL fragment program: yes; GL vertex buffer object: yes; GL framebuffer object: yes; GL version is 1.4+: yes; Unity 3D supported: yes. All sections say "yes".

    Read the article

  • Do I need "cube subclasses" to represent the blocks in a Minecraft-like world?

    - by stighy
    I would like to try to develop a very simple game like Minecraft for my own education. My main problem at the moment is figuring out how to model classes that represent the world, which will be made of blocks of various types (such as dirt, stone and sand). I am thinking of creating the following class structure: Cube (with properties like color, strength, flammable, gravity) with subclasses Dirt, Stone, Sand, et cetera. My question is: do I need the Cube subclasses, or is a single Cube class sufficient?
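
    One alternative worth considering (my own sketch, not from the question): since the block kinds differ only in data, a single Cube class can be parameterized by a shared BlockType descriptor, with subclasses reserved for blocks that actually need different behaviour. The names below are hypothetical:

        // Data-driven alternative to one subclass per block.
        public class BlockType
        {
            public string Name;
            public string Color;
            public float Strength;
            public bool Flammable;
            public bool AffectedByGravity;
        }

        public class Cube
        {
            public BlockType Type;     // Dirt, Stone, Sand... become data, not classes
            public int X, Y, Z;
        }

        public static class BlockTypes
        {
            public static readonly BlockType Dirt  = new BlockType { Name = "Dirt",  Strength = 1f, AffectedByGravity = false };
            public static readonly BlockType Sand  = new BlockType { Name = "Sand",  Strength = 1f, AffectedByGravity = true };
            public static readonly BlockType Stone = new BlockType { Name = "Stone", Strength = 5f, Flammable = false };
        }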

    Read the article

  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled "Microsoft is Aligning with ODBC for Native Relational Data Access", in which they basically said "OLE DB is the past, ODBC is the future. Deal with it." From that blog post: "We encourage you to adopt ODBC in the development of your new and future versions of your application. You don't need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap." I recently undertook a project using SSIS 2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately my finding was that the ODBC Connection Manager is not yet ready for prime-time use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so results in the error: "Disconnected recordsets are not available from ODBC connections." I have filed a bug on Connect at "ODBC Connection Manager does not have same functionality as OLE DB". For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet - best to wait for the next version of SSIS before doing that. I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind: it doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at "ODBC System DSNs are not available in the ODBC Connection Manager"; UPDATE: according to a comment on that Connect item this may only be a problem on 64-bit); and in the OLE DB Connection Manager parameter ordinals are 0-based, whereas in the ODBC Connection Manager they are 1-based (oh I just can't wait for the upgrade mess that ensues from this one!!!). You have been warned! @jamiet

    Read the article

  • Using Instance Nodes, worth it?

    - by Twitch
    I am making a 2D game with various environments containing lots and lots of objects. There is a forest scene with about 1200 objects in total (mainly trees), of which around 100 are visible on camera at any given time as you move through the level. These consist of around 20 different kinds of trees and other props. Each object is usually 2-6 triangles with a transparent texture. My developer asked me to replace each object in the scene with a node and keep only a minimal number of actual objects (300+ or so?), since there are a few modified unique meshes, so that he can instantiate the actual objects to keep the game light. Is this actually effective? And if so, by how much? I've read about draw calls and such, and I suppose that if I combine each texture (10 kinds of trees) into one mesh it will have the same effect?

    Read the article

  • Looking for a non-cryptographic hash function that returns a single character

    - by makerofthings7
    Suppose I have a dictionary of ASCII words stored in uppercase. I also want to save those words into separate files so that the total word count of each file is approximately the same. By simply looking at the word I need to know which file it should be in (if it's there at all). Duplicate words should go into the same file and overwrite the last one. My first attempt at solving this problem is to use .NET's object.GetHashCode() function and .Trim() to get one of the "random" characters that pop up. I asked a similar question here If I only use one character of object.GetHashCode() I would get a hash code character of A..Z or 0..9. However saving the result of GetHashCode to disk is a no-no so I need a substitute. Question: What algorithm (or subset of an algorithm) is appropriate for pigeonholing strings into a single character or range of characters (Like hex 0..F offers 16 chars)? Real world usage: I'll use this answer to modify the Partition key used in Azure Table storage as described here
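
    Since object.GetHashCode() is not guaranteed to be stable across processes or .NET versions, a persisted bucket key needs a hash you control. A minimal sketch of one option (my suggestion, not from the question): a stable FNV-1a hash reduced to one character from a fixed alphabet.

        using System;

        static class WordBucket
        {
            private const string Alphabet = "0123456789ABCDEF";   // 16 buckets, hex-style

            // FNV-1a: stable across runs, cheap, and spreads short uppercase words well enough.
            public static char BucketFor(string word)
            {
                const uint offsetBasis = 2166136261;
                const uint prime = 16777619;
                uint hash = offsetBasis;
                foreach (char c in word.ToUpperInvariant())
                {
                    hash ^= c;
                    hash *= prime;
                }
                return Alphabet[(int)(hash % (uint)Alphabet.Length)];
            }
        }

        // Usage: every occurrence of "HELLO" lands in the same bucket, e.g. file "words_B.txt".
        // char bucket = WordBucket.BucketFor("HELLO");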

    Read the article

  • Windows Metro: The hardest Hello World example I have ever seen :P

    - by Rob Addis
    http://msdn.microsoft.com/en-us/library/windows/apps/hh986965.aspx Contrast that with Hello World in assembler on Windows:

        .386
        .model flat, stdcall
        option casemap :none
        extrn MessageBoxA@16 : PROC
        extrn ExitProcess@4 : PROC

        .data
                HelloWorld db "Hello World!", 0

        .code
        start:
                lea eax, HelloWorld
                mov ebx, 0
                push ebx
                push eax
                push eax
                push ebx
                call MessageBoxA@16
                push ebx
                call ExitProcess@4
        end start

    Read the article

  • Wireless card unseen by lspci

    - by al-Amjad Tawfiq Isstaif
    I installed both Ubuntu 8.04 and 10.04 and tried to install the wireless card. Although I succeeded in installing the driver using ndiswrapper, it tells me that the hardware is not present, and when I use lspci it doesn't list the card. I have the same laptop model and I think it should have the same entry for the wireless card as here: http://ubuntuforums.org/showthread.php?t=1810193 Could the problem be something other than a hardware malfunction?

    Read the article

  • BonaVista Dimensions used as a report service

    - by Marco Russo (SQLBI)
    Recently I have seen a long demo of BonaVista Dimensions. It is a product that is able to create reports and, most importantly, dashboards. You can also use it without SQL Server and Analysis Services, just by importing data into a local cube file that you can model using its own simple-to-use user interface. But what is interesting to me (in this post) is the capability to connect to an SSAS cube. It is somewhat similar to XLCubed, and in reality these two products have something in common, because both...(read more)

    Read the article

  • The old "do as I say, not as I do" problem

    - by AaronBertrand
    Microsoft is often considered a leader, an innovator, a trend-setter. The same could be said for Apple, Google, and a host of other tech companies. And each of those has its set of critics as well, who think that the company is the opposite - or worse. Some people think it is a good idea to model their own code, architecture or applications after things that these companies have done, but this is not always the best approach. Humans work at these companies too, and everyone is prone to mistakes,...(read more)

    Read the article

  • OSB, Service Callouts and OQL - Part 3

    - by Sabha
    In the previous sections of the "OSB, Service Callouts and OQL" series, we analyzed the threading model used by OSB for Service Callouts, examined OSB Server threads hung in Service Callouts, and identified the Proxies and Remote services involved in the hang using OQL. This final section of the series focuses on the corrective action to avoid Service Callout related OSB Server hangs. Please refer to the blog post for more details.

    Read the article

  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a Cloud Management standard. That work has culminated in the release today of the Cloud Infrastructure Management Interface (CIMI) version 1.0 by the DMTF. CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by cloud vendors, you will no longer need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse engineer its inner workings, CIMI is a de jure standard that is under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today, and as a result CIMI has a strong foundation in real-world experience. What does CIMI allow? CIMI is both a model for the resources (computing, storage, networking) in the cloud and a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a "document" that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports the discovery of which features are implemented. This means that you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them. Isn't it too early for a standard? A key feature of a successful standard is that it allows for compatible extensions to occur within the core framework of the interface itself. CIMI's feature discovery (through metadata) is used to convey to the client that additional, possibly vendor-specific features have been implemented. As multiple vendors implement such features, they become candidates to add to future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest-common-denominator type of specification. Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect to see CIMI adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start to ask their cloud vendors to show a CIMI implementation in their roadmap. For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud
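
    As a rough illustration of the "send a document over HTTP" idea (the endpoint URL and JSON attribute names here are my assumptions, not copied from the CIMI specification), creating a Machine might look something like this:

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class CimiMachineCreate
        {
            static async Task Main()
            {
                // Hypothetical endpoint and body; real CIMI attribute names may differ.
                var machineDoc = "{ \"resourceURI\": \"http://schemas.dmtf.org/cimi/1/MachineCreate\","
                               + "  \"name\": \"web-01\","
                               + "  \"description\": \"guest VM created via CIMI\","
                               + "  \"machineTemplate\": { \"href\": \"/machineTemplates/small\" } }";

                using (var client = new HttpClient())
                {
                    var content = new StringContent(machineDoc, Encoding.UTF8, "application/json");
                    // POST the Machine document to the provider's machine collection.
                    HttpResponseMessage response =
                        await client.PostAsync("https://cloud.example.com/machines", content);
                    Console.WriteLine((int)response.StatusCode);
                }
            }
        }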

    Read the article

  • Equation / formula to determine an object's position on an elliptical path

    - by David Murphy
    I'm making a space game, so I need objects to follow an elliptical path (orbit). I've worked out how to calculate all the important aspects of my orbits; the only remaining thing is how to have an object follow one. My Orbit class contains the major and minor (and by extension semi-major and semi-minor) axis lengths, and even the foci, radius, area and circumference. What is the equation to determine an object's x/y position (I only need 2D) on an ellipse, at a certain speed, after a period of time? Basically, every frame I want to update the position based on the amount of elapsed time. I would also like the speed along the path to increase and decrease according to the distance from the object it's orbiting, but I'm not sure how to factor this in, given that at any point in time the speed has changed from its previous value. EDIT: I can't answer my own question, but I found that the question and answer are already on Stack Exchange: "Kepler orbit: get position on the orbit over time".
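
    For reference, a small sketch of the standard Kepler approach the linked answer describes (class and method names are mine): advance the mean anomaly linearly with time, solve Kepler's equation for the eccentric anomaly, and read the position off the ellipse; the speed-up near the occupied focus falls out automatically.

        using System;

        static class KeplerOrbit
        {
            // a = semi-major axis, b = semi-minor axis, period = orbital period.
            // Returns the position relative to the focus the body orbits around.
            public static (double x, double y) PositionAt(double a, double b, double period, double t)
            {
                double e = Math.Sqrt(1.0 - (b * b) / (a * a));       // eccentricity
                double M = 2.0 * Math.PI * (t % period) / period;    // mean anomaly grows linearly with time

                // Solve Kepler's equation  M = E - e*sin(E)  for the eccentric anomaly E (Newton's method).
                double E = M;
                for (int i = 0; i < 10; i++)
                    E -= (E - e * Math.Sin(E) - M) / (1.0 - e * Math.Cos(E));

                // Position measured from the occupied focus; the body moves faster when closer to it.
                double x = a * (Math.Cos(E) - e);
                double y = b * Math.Sin(E);
                return (x, y);
            }
        }

        // Per frame: var (x, y) = KeplerOrbit.PositionAt(a, b, period, elapsedSeconds);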

    Read the article

  • self referencing tables, good or bad?

    - by NimChimpsky
    For representing geographical locations within an application, the design of the underlying data model suggests two clear options (or maybe more?): one table with a self-referencing parent_id column (e.g. UK - London, where London's parent_id is the UK's id), or two tables with a one-to-many relationship using a foreign key. My preference is for one self-referencing table, as it easily allows extending into as many sub-regions as required. In general, do people veer away from self-referencing tables, or are they A-OK?

    Read the article

  • Would Ubuntu support this Avell G1711?

    - by Bernardo
    I just bought a gaming notebook (model G1711) from a local brand in Brazil named Avell. Its configuration is quite advanced, and this is the reason for my purchase. However, all of their official support currently targets Windows 7 or 8. So would Ubuntu work on this machine? It has an i5 Haswell, an Intel HM87 chipset, an Nvidia GeForce GTX 765M, sound with THX TruStudio Pro, a Blu-ray writer, a US-layout 101/102 keyboard, USB 2.0 and 3.0, an eSATA port, and a 9-in-1 memory card reader.

    Read the article

  • Subwoofer doesn't work on Dell Inspiron 17R after upgrade to 13.10

    - by Danil Lopatin
    After upgrading from 13.04 to 13.10, the Dell Inspiron 17R's subwoofer stopped working. In Ubuntu 13.04 there was a workaround: adding the following line to the file /etc/modprobe.d/alsa-base.conf: options snd-hda-intel model=ref This issue was discussed here: How to activate subwoofer in Inspiron 17r? After the update, the previous workarounds don't help, and with them applied I get no sound from any speaker at all. Is there some other fix for the latest version?

    Read the article

  • DOM Snitch: a Google Chrome extension for hunting down JavaScript code flaws via heuristic detection

    DOM Snitch: an open source Google Chrome extension for hunting down JavaScript code flaws via heuristic detection. "DOM Snitch" is a new open source extension for Chrome, intended to help developers, testers and security researchers flush out flaws in the client-side code of websites and web applications. This extension, developed by Google, makes it possible, as its name suggests, to follow in real time the evolution of the DOM (Document Object Model) of web pages under the action of the various scripts running in them. The key feature and main interest of Snitch lie in its advanced capabilities for heuristic detection of flaws...

    Read the article
