Search Results

Search found 7554 results on 303 pages for 'shared secret'.


  • What is the breakdown of jobs in game development?

    - by Destry Ullrich
    There's a project I'm trying to start for Indie Game Development; specifically, it's going to be a social networking website that lets developers meet through (It's a secret). One of the key components is showing what skills members have. Question: I need to know what MAJOR game development roles are not represented in the following list, keeping in mind that many specialist roles are being condensed into broader, generalist roles:
    Art
    - Animator (Characters, creatures, props, etc.)
    - Concept Artist (2D scenes, environments, props, silhouettes, etc.)
    - Technical Artist (UI artists, typefaces, graphic designers, etc.)
    - 3D Artist (Modeling, rigging, texture, lighting, etc.)
    Audio
    - Composer (Scores, music, etc.)
    - Sound Engineer (SFX, mood setting, audio implementation, etc.)
    - Voice (Dialog, acting, etc.)
    Design
    - Creative Director (Initial direction, team management, communications, etc.)
    - Gameplay Designer (Systems, mechanics, control mapping, etc.)
    - World Designer (Level design, aesthetics, game progression, events, etc.)
    - Writer (Story, mythos, dialog, flavor text, etc.)
    Programming
    - Engine Programming (Engine creation, scripting, physics, etc.)
    - Graphics Engineer (Sprites, lighting, GUI, etc.)
    - Network Engineer (LAN, multiplayer, server support, etc.)
    - Technical Director (I don't know what a technical director would even do.)
    Post script: I have an art background, so I'm not familiar with what the others behind game creation actually do. What's missing from this list, and if you feel some things should be rearranged, how so?

    Read the article

  • Older PHP v/s newer PHP version [closed]

    - by Monty
    My company is building a website with a database. The programmers used PHP 5.0. My (shared) service provider has in the meantime upgraded to PHP 5.3.0. Fixes have been ongoing and seem endless... Do I move to a VPS and install the older PHP, or should we rebuild with the newer PHP? When working remotely with programmers, what is the protocol regarding delivery of all code? What is the industry standard? I need an independent party to review their work. How should this be approached?

    Read the article

  • Automatically Reset Theme To Default, SharePoint 2010

    - by KunaalKapoor
    Manually / Through the UI:
    1. On the top link bar, click Site Settings.
    2. On the Site Management page, in the Customization section, click Apply theme to site.
    3. On the Apply Theme to Web Site page, select No Theme (Default) from the list.
    4. Click Apply.
    Through script:

    # Removes any applied theme from the given web, reverting it to the default look.
    # Example usage (hypothetical URL): Apply-SPDefaultTheme "http://myserver" "myweb"
    function Apply-SPDefaultTheme([string]$SiteUrl, [string]$webName)
    {
        $site = new-object Microsoft.SharePoint.SPSite($SiteUrl)
        $web = $site.OpenWeb($webName)
        $theme = [Microsoft.SharePoint.Utilities.ThmxTheme]::RemoveThemeFromWeb($web, $false)
        $web.Update()
        $web.Dispose()
        $site.Dispose()
    }

    After looking in the SPTHEMES.XML file found in the C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\1033 folder, you do see there is a theme with a theme name of "none", since there is no "default" theme in 2010. So if you want to reset the theme to the default, remember that there is no "default": you need to select 'none'. :)

    Read the article

  • RetinaPad Enables Retina Display for iPhone Apps on the iPad

    - by Jason Fitzpatrick
    RetinaPad is an iPad application that activates the Retina Display resolution of iPhone applications to increase their clarity on the iPad. It's a feature that should be built in but is currently only available for jailbroken iPads. The premise is simple: currently, iPads lack support for the Retina Display-level resolution that iPhone apps are capable of displaying. RetinaPad allows you to stop using the ugly and blocky simple doubling available on the iPad and start accessing the higher-resolution Retina Display mode for iPhone applications on the iPad. It's such a trivial thing that it's outright shameful Apple doesn't include it by default; you shouldn't have to jailbreak your device to unlock functionality that should be there right from the factory. Check out the demo video below to see it in action. Fire up your jailbroken iPad, launch Cydia, and search for RetinaPad. RetinaPad is $2.99, iPad only.

    Read the article

  • Oracle Private Cloud Solutions

    - by user462034
    To enable organizations to have complete control and visibility over security, compliance, and service levels, Oracle also helps organizations build, deploy, and manage their own cloud environments, including integrated application, platform, and infrastructure products and solutions. Oracle's private cloud offerings include:
    - Oracle Cloud Applications: a complete and modular set of enterprise applications, engineered from the ground up to be cloud-ready and to coexist seamlessly in mixed environments.
    - Oracle Cloud Platform: a shared and elastically scalable platform for consolidation of existing applications and new application development and deployment.
    - Oracle Cloud Infrastructure: a complete selection of servers, storage, networking fabric, virtualization software, operating systems, and management software to support diverse public and private cloud applications.

    Read the article

  • What could be a reason for a cross-platform server application developer to make his app work in multiple processes?

    - by Kabumbus
    We are considering development of a server app that works heavily with big data streams. The app will run on one powerful server. It will be developed as a cross-platform application, working on Windows, Mac OS X, and Linux: the same code on many platforms, in a standalone server architecture. We wonder what the benefits, for programmers and for the server's end users, would be of distributing the application over processes as well as over threads. Some people have told me that even with 48 cores, a process's 4 threads would be shared by the OS across all cores; is that true?
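
    The answer to the last question is essentially yes: the OS scheduler is free to place a process's threads on any core, but 4 threads can only ever occupy 4 cores' worth of CPU at once. A minimal Java sketch of this (illustrative, not from the original question):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CoreDemo {
        public static void main(String[] args) {
            // e.g. 48 on the server described above
            int cores = Runtime.getRuntime().availableProcessors();
            System.out.println("cores available: " + cores);

            // Only 4 worker threads: the OS may schedule them on any of the
            // cores, but at most 4 tasks execute in parallel at any instant.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 8; i++) {
                final int id = i;
                pool.submit(() ->
                    System.out.println("task " + id + " on " + Thread.currentThread().getName()));
            }
            pool.shutdown();
        }
    }

    As for processes versus threads: separate processes buy address-space isolation (one crash or leak doesn't take the whole server down) at the cost of slower inter-process communication, while threads share memory cheaply but also share fate.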

    Read the article

  • Packing for JavaOne

    - by Tori Wieldt
    While you are packing for JavaOne, here are some things to remember to bring:
    1) A Jacket! While October is considered the summer in San Francisco, the heat only lasts a day or two. The fog can roll in any day, and it can be chilly (and maybe even rain).
    2) Your Oracle Login. Make sure you have your Oracle.com account login details with you when you arrive onsite in San Francisco. This is the username and password you used/created for your JavaOne 2012 registration. You'll need these to check in and get your badge, as well as to gain access to My Account and Schedule Builder onsite at the event.
    3) Walking Shoes. You'll want comfortable and practical shoes, as this city requires lots of walking and has lots of hills.
    4) Thumb Drives. When sharing cool code, nothing beats sneaker-net. That said, practice safe computing.
    5) Consider Downloading a Ride-Sharing Service App. SideCar, Lyft, Uber, and RelayRides are taking SF by storm and are a popular alternative to yellow taxis. These are unregulated ride-sharing services, so ride at your own risk.
    Hipster Tips for SF:
    1) Don't call it Frisco.
    2) If you wear shorts, don't complain about how cold it is.
    3) Bright-colored clothes are for tourists. Locals wear black.
    4) The most fun ice-cream flavors in town are at Humphry Slocombe. Check out "secret breakfast."
    5) The Mission is hip.
    6) Don't expect there to be a Starbucks or anything besides a great view on the other side of the Golden Gate Bridge.
    7) SF has seasons; they are just more subtle.

    Read the article

  • Need Sql Server Hosting 50GB or More

    - by Leo
    Hi, I am looking for a hosting solution (dedicated or shared) that will allow me to host a SQL Server database service (not SQL Server Express, but the Web edition). The size of my database might grow to 50GB or more. The web application will perform more read than write operations. I also need daily backups and RAID 1 storage. Is there a reliable and economical hosting company that provides this? Additional question: if there is an easy way to host MS SQL on the Amazon EC2 service, that would be preferable.

    Read the article

  • How to direct a Network Solutions domain name to an html website hosted on Google Drive? [on hold]

    - by Air Conditioner
    To begin with, I'd wanted to take advantage of HTML, CSS, and so on to build a website that looks and works just as I'd like it to. I took a look around at how I could make that work, and I soon saw a Lifehacker article showing that it's possible to host website files on Google Drive. I then made sure that the folder containing the files was shared publicly on the web, and I now have a working Google-Drive-hosted version of the website. However, I did want the custom domain, so I registered one with Network Solutions. So now, I'm curious how I should direct my Network Solutions domain to the index.html I'm hosting on Google Drive. Would anyone have an idea?

    Read the article

  • Who should have full visibility of all (non-data) requirements information?

    - by ebyrob
    I work at a smallish mid-size company where requirements are sometimes nothing more than an email or a brief meeting with a subject-matter manager requiring some new feature. Should a programmer working on a feature reasonably expect to have access to such "request emails" and other requirements information? Or is it more appropriate for a program manager (PGM) to rewrite all requirements before sharing them with programmers? The company is not technology-centric and has between 50 and 250 employees (fewer than 10 programmers in sum). Our project management "software" consists of a "TODO.txt" checked into source control in "/doc/". Note: this has nothing to do with "sensitive data access", unless a particular subject-matter manager's style of email correspondence is top secret. Given the suggested duplicate, perhaps this could be a turf war, as the PGM would like to specify HOW, whereas WHY is absent and WHAT is muddled by the time it gets through to the programmer(s). Basically: should specification be transparent to programmers? Perhaps a history of requirements might exist; shouldn't a programmer be able to see that history of reqs if/when they can tell something is hinky in the spec? This isn't a question about organizing requirements. It is a question about WHO should have full VISIBILITY of requirements. I'd propose it should be ALL STAKEHOLDERS. Please point out where I'm wrong here.

    Read the article

  • Auto-provisioning hosting via API

    - by user101289
    I've built a sort of "software as a service" website package for a specific industry. What I am looking to do is create a payment gateway that allows users to subscribe, and once the subscription is active, it would auto-provision a web hosting plan for them (a shared account on a server, probably in a chroot'd environment, so each user would be insulated from the others). Ideally it would auto-install a CMS as well. Tons of web hosts provide a simple reseller plan where I could manually create all the users' hosting accounts, but so far none that I've found allow you to do this via an API. Is there a way to do this short of writing custom shell scripts on something like an EC2 platform? I'd prefer to leave all the server maintenance in the hands of dedicated support staff rather than having to manually handle updates, backups, etc. Thanks for any tips.

    Read the article

  • Is there any way to simulate a slow connection between my server and an iPad (without installing anything on the server)?

    - by Clay Nichols
    Some of our webapp users have difficulty on slower connections. I'm trying to get a better idea of what that "speed barrier" is, so I'd like to be able to test a variety of connection speeds. I've found ways to do this on Windows but not on the iPad, so I'm looking for some sort of proxy service that will work with any device (not running ON that device). I did find an article about using Charles Proxy and providing a connection to another device, but I was hoping for something simpler (it need not be free). Constraints:
    - We are on a shared server, so we can't install anything, and we are limited in our control over that server.
    - I'd like to test an iPad, an Android tablet, and a Windows PC.
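
    If no hosted throttling service turns up, one option that needs nothing installed on the server is a small throttling relay run on a spare machine, with each device pointed at the relay's host and port instead of at the web app directly. A minimal Java sketch, assuming a hypothetical upstream host and a crude per-stream bandwidth cap:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SlowRelay {
        static final int BYTES_PER_SEC = 16 * 1024; // simulated link speed

        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(8888)) {
                while (true) {
                    Socket client = server.accept();
                    // "your-webapp-host" is a placeholder for the real server
                    Socket upstream = new Socket("your-webapp-host", 80);
                    pump(client.getInputStream(), upstream.getOutputStream());
                    pump(upstream.getInputStream(), client.getOutputStream());
                }
            }
        }

        // Copies one direction of the conversation, sleeping between chunks
        // so throughput roughly approximates BYTES_PER_SEC.
        static void pump(InputStream in, OutputStream out) {
            new Thread(() -> {
                byte[] buf = new byte[1024];
                try {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        out.flush();
                        Thread.sleep(1000L * n / BYTES_PER_SEC);
                    }
                } catch (Exception ignored) { /* connection closed */ }
            }).start();
        }
    }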

    Read the article

  • Accessing network shares through ASP.Net

    - by jkrebsbach
    In my impersonation-enabled web site I needed to access files on a network share.  Running locally, everything worked fine. After deploying out to the dev server and hitting the web site from my PC, things fell apart. With impersonation enabled, we can access files on the server itself, but a network share is another story.  Accessing a share on another server, we encounter the infamous "double hop" situation, where the credentials have already been digested on the web server and are not available for the file server. We need to either expose the shared files to the identity IIS is running under, or create a new impersonation context.

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    As I am in the process of creating a 3D C++ game, I was wondering what would be more beneficial when dealing with game asset storage. I have seen some games use a single compressed file with every asset in it, and others use lots of little compressed files. If I had lots of individual files, I would not need to load one large file at once and use up memory, but the code would have to do file seeking when the level loads to find all the correct files. There is no file seeking needed when dealing with one large file, but then what about all the assets not currently needed that would get loaded with that one file? I could also have an asset file for each level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me: what other advantages and disadvantages are there to either way of doing things?
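
    One common middle ground is a single pack file fronted by an index, so only the bytes of the requested asset are ever read. A minimal sketch (in Java for brevity; the header format of a count followed by name/offset/length entries is invented for illustration):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.HashMap;
    import java.util.Map;

    public class AssetPack {
        private final RandomAccessFile pack;
        private final Map<String, long[]> index = new HashMap<>(); // name -> {offset, length}

        public AssetPack(String path) throws IOException {
            pack = new RandomAccessFile(path, "r");
            // Read the index from the pack header: entry count, then
            // (name, offset, length) triples written by the build step.
            int count = pack.readInt();
            for (int i = 0; i < count; i++) {
                String name = pack.readUTF();
                index.put(name, new long[] { pack.readLong(), pack.readLong() });
            }
        }

        // Seeks straight to the asset, so memory use stays bounded even
        // though everything lives in one file.
        public byte[] load(String name) throws IOException {
            long[] entry = index.get(name);
            pack.seek(entry[0]);
            byte[] data = new byte[(int) entry[1]];
            pack.readFully(data);
            return data;
        }
    }

    Shared assets can then live once in a common pack and be referenced from each level's index, which sidesteps the per-level duplication question.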

    Read the article

  • Is there a debian/ubuntu policy on softlinking things to another location in opt once they're installed?

    - by AbrahamVanHelpsing
    Is there a Debian/Ubuntu policy on softlinking things to another location in /opt once they're installed properly in /usr/share or /usr/lib? Here's a simple example: packaging up dnsenum. It's a REALLY simple package (4 files): a Perl script, two wordlists, and a readme. So from what I gather:
    - The wordlists should go in /usr/share/dnsenum/
    - The Perl script itself would go in /usr/lib/dnsenum/
    - The readme would go in /usr/share/doc/dnsenum/
    - Add a wrapper bash script that goes in /usr/bin and just passes arguments to dnsenum.pl.
    The question is this: if various tools provide wordlists or some other shared resource, is there a policy on linking all the wordlists from the different packages into /opt/wordlists/? It seems like the "right" thing to do, respecting the directory structure while still making things convenient.

    Read the article

  • Next Quarterly Customer Update Webcast is Nov 27th (Nov 28th in Asia Pacific)

    - by John Klinke
    Join the WebCenter team as we present the latest product direction that was recently shared at the Oracle OpenWorld conference in San Francisco last month. This Oracle WebCenter Quarterly Customer Update Webcast is scheduled on Nov 27th (Nov 28th in Asia Pacific). We will also be sharing the latest product updates and key support announcements that all WebCenter professionals and solution owners need to know. Don't miss out on getting the latest information. There will be two live sessions, with Q&A at the end of each session.
    - Register for Session 1: Nov 27th at 9am San Francisco, 12pm New York, and 5pm London
    - Register for Session 2: Nov 28th at 9am Singapore, 11am Sydney, and 6pm (Nov 27th) San Francisco

    Read the article

  • Software Architecture

    - by Roger
    I have a question about software architecture; can anyone help me or give me some hints? Currently, I have a J2EE project that is deployed on a server. I also have a standard Java (J2SE) project that should run 24 hours x 7 days to monitor something. It cannot run separately, because the Java project shares some of the same classes, such as the Java bean classes, with the J2EE project. Maybe my design is not correct; can anyone suggest what I should do? Using SOA? Would that be correct? My current solution is to run this Java project using a bash script, but I don't think that is the best idea. I list my class packages:
    com.company.alteck
    com.company.altronics
    com.company.gamming
    com.company.jaycar
    com.company.jup
    com.company.rpg
    com.company.sansai
    com.company.wiretech
    com.company.yatsal
    com.ebay.api
    com.ebay.bean
    com.ebay.credential
    com.ozsstock.finals
    com.ozstock.adapter
    com.ozstock.aspectj
    com.ozstock.model
    com.ozstock.persistence
    com.ozstock.service
    com.ozstock.suppliers
    My structure looks like this: all the packages containing "company" should run separately, but they depend on the model bean classes. Can anyone give me some hints to redesign?

    Read the article

  • Azure website that talks to third party services

    - by Andy Frank
    I have a website that crawls data from many third-party services when a user browses to a webpage. This can be really slow, because I hit third-party servers and process the returned data before showing it to the user. I am hosting the website on Azure (shared mode), and I am thinking about improving my implementation. Here is what I am thinking: run a service that crawls data from the third-party services, processes it, and then stores it in a database. When a user browses to my site, the site pulls data from the database and displays it to the user. But the above solution is not clear to me. Should I have a normal service or a WCF service? If a WCF service, should the website talk to the database or to the WCF service (which can access data from the database)? If a normal service, how can I deploy it on Azure?
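
    Whatever the service type, the shape of the fix is the same: move the slow crawling off the request path and serve page loads from a store the crawler keeps warm. A minimal sketch of that pattern (in Java for illustration, since the poster's stack is .NET; an in-memory map stands in for the database):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class CrawlerCacheDemo {
        // Stand-in for the database table the website would read from.
        static final Map<String, String> cache = new ConcurrentHashMap<>();

        public static void main(String[] args) {
            ScheduledExecutorService worker = Executors.newSingleThreadScheduledExecutor();
            // The worker hits the slow third-party services off the request path.
            worker.scheduleAtFixedRate(() -> {
                String data = fetchFromThirdParty(); // slow network calls happen here
                cache.put("latest", data);           // page requests read this instantly
            }, 0, 10, TimeUnit.MINUTES);
        }

        static String fetchFromThirdParty() {
            return "crawled-data"; // placeholder for the real HTTP calls and parsing
        }
    }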

    Read the article

  • Upgrade from ubuntu 9.10 to 11.10

    - by Chinnu
    Our project goal is to develop CUDA programs. Our workstation has CUDA 3.1 installed on Ubuntu 9.10. We need to program in CUDA 5.0, which can be installed only on Ubuntu 11.10 or 12.04. We tried upgrading but were faced with many problems, as 9.10 is no longer supported, so we chose to proceed with a clean installation. Since we have a shared workstation, we need to back up the settings. We decided to use Clonezilla for cloning the system, but booting from the LiveCD showed an unexpected error. Another option was to install 11.10 on an external HDD by partitioning it, but GParted could not be installed and terminated with the error "installArchives() failed", which we couldn't solve even after modifying the sources.list. We are stuck either way, have no idea how to proceed, and have a deadline to submit our CUDA program. Any suggestion is welcome.

    Read the article

  • Hands-on GlassFish FREE Course covering Deployment, Class Loading, Clustering, etc.

    - by arungupta
    René van Wijk, an Oracle ACE Director and a prolific blogger at middlewaremagic.com, has shared the contents of a FREE hands-on course on GlassFish. The course provides an introduction to GlassFish internals, JVM tuning, Deployment, Class Loading, Security, Resource Configuration, and Clustering. The self-paced hands-on instructions guide you through the process of installing, configuring, deploying, tuning, and other aspects of application development and deployment on GlassFish. The complete course material is available here. This course can also be taken as a paid instructor-led course, where attendees will get their own VM and will have plenty of time for Q&A and discussions. Register for this paid course. Oracle Education also offers a similar paid course on Oracle GlassFish Server 3.1: Administration and Deployment.

    Read the article

  • How can I upgrade from Ubuntu 9.10 to 11.10?

    - by Chinnu
    We need to program in CUDA 5.0, which can be installed only on Ubuntu 11.10 or 12.04. Our current version, 9.10, is no longer supported, so we chose to proceed with a clean installation. Since we have a shared workstation, we used Clonezilla for cloning the system. However, booting from the LiveCD showed an unexpected error. We also tried to install 11.10 on an external HDD by partitioning it, but GParted could not be installed and terminated with the error "installArchives() failed", which we couldn't solve even after modifying the sources.list. Is there a way to proceed with this upgrade?

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, Identity-based pooling. Then there is one more topic to cover: how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically, a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex when the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. Heterogeneous connections are created as follows:
    1. At connection pool initialization, the physical JDBC connections, based on the configured or default "initial capacity", are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from the data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
    - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
    - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection and getting a new one is very expensive. If you can use a normal homogeneous pool or one of the lightweight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html "Enable identity-based connection pooling for a JDBC data source" in the Oracle WebLogic Server Administration Console Help.

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:
    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance. When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC. For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
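
    As a concrete illustration of the getConnection(user, password) path described above, here is a minimal sketch; the JNDI name and credentials are hypothetical, and the data source is assumed to have identity-based pooling and "use database credentials" enabled:

    import java.sql.Connection;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class IdentityPoolDemo {
        public static void main(String[] args) throws Exception {
            Context ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/myDS");
            // These credentials are used directly as the DBMS identity; the pool
            // returns a physical connection with a matching credential, or makes
            // one according to the LRU policy described above.
            try (Connection conn = ds.getConnection("scott", "tiger")) {
                System.out.println(conn.getMetaData().getUserName());
            }
        }
    }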

    Read the article

  • Forked a project, where do my version numbers start?

    - by TheLQ
    I have forked a project and have changed lots of it. This fork isn't just a small feature change here and a buried bug fix there; it's a pretty substantial change. Only most of the core code is shared. I forked this project at v2.5.0, and for a while I've been versioning my fork at v3.0. However, I'm not sure if this is the right way, mainly because when that project hits v3.0, things get confusing. But I don't want to start over at v1.0 or v0.1, because that implies infancy, instability, and a lack of refinement. That isn't true: most of the core code is very refined and stable. I'm really lost on what to do, so I ask here: what's the standard way to deal with this kind of situation? Do most forks start over again, bump up version numbers, or do something else that I'm not aware of?

    Read the article

  • Should a singleton be life-time available or should it be destroyable?

    - by Manoj R
    Should the singleton be designed so that it can be created and destroyed at any time in the program, or should it be created so that it is available for the lifetime of the program? Which one is best practice? What are the advantages and disadvantages of both? EDIT: As per the link shared by Mat, the singleton should be static. But then what are the disadvantages of making it destroyable? One advantage is that its memory can be saved when it is not useful.
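
    For concreteness, here is a minimal Java sketch (class name illustrative) of a lazily created, destroyable singleton. The trade-off is visible in the code: destroy() lets the memory be reclaimed, but callers pay the initialization cost again on the next use, and any reference saved before destroy() keeps the old instance alive:

    public final class Settings {
        private static volatile Settings instance;

        private Settings() {
            // Load expensive state here.
        }

        public static Settings getInstance() {
            if (instance == null) {
                synchronized (Settings.class) {
                    if (instance == null) {
                        instance = new Settings();
                    }
                }
            }
            return instance;
        }

        // Destroyable variant: frees the instance for garbage collection when
        // it is not useful, at the cost of re-creating it on the next call.
        public static synchronized void destroy() {
            instance = null;
        }
    }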

    Read the article

  • Integrating Amazon EC2 in Java via NetBeans IDE

    - by Geertjan
    Next, having looked at Amazon Associates services and Amazon S3, let's take a look at Amazon EC2, the elastic compute cloud which provides remote computing services. I started by launching an instance of Ubuntu Server 14.04 on Amazon EC2, which looks a bit like this in the online AWS Management Console, though I whitened out most of the details. Now that I have at least one running instance available on Amazon EC2, it makes sense to use the services that are integrated into NetBeans IDE. I created a new application with one class, named "AmazonEC2Demo". Then I dragged the "describeInstances" service that you see above, with the mouse, into the class. The IDE then automatically created all the other files you see below, i.e., 4 Java classes and one properties file. In the properties file, register the access ID and secret keys. These are read by the other generated Java classes. Signing and authentication are done automatically by the code that is generated, i.e., there's nothing generic you need to do and you can immediately begin working on your domain-specific code. Finally, you're now able to rewrite the code in "AmazonEC2Demo" to connect to Amazon EC2 and obtain information about your running instance:

    public class AmazonEC2Demo {

        public static void main(String[] args) {
            String instanceId1 = "i-something";
            RestResponse result;
            try {
                // AmazonEC2Service and RestResponse are among the classes
                // generated by the IDE for the dragged-in service.
                result = AmazonEC2Service.describeInstances(instanceId1);
                System.out.println(result.getDataAsString());
            } catch (IOException ex) {
                Logger.getLogger(AmazonEC2Demo.class.getName()).log(Level.SEVERE, null, ex);
            }
        }

    }

    From the above, you'll receive a chunk of XML with data about the running instance: its name, status, dates, etc. In other words, you're now ready to integrate Amazon EC2 features directly into the applications you're writing, without very much work to get started. Within about 5 minutes, you're working on your business logic, rather than on the generic code that anyone needs when integrating with Amazon EC2.

    Read the article
