Search Results

Search found 14354 results on 575 pages for 'existing records'.


  • Turn-based JRPG battle system architecture resources

    - by BenoitRen
    For the past few months I've been busy programming a 2D JRPG (Japanese-style RPG) in C++ using the SDL library. The exploration mode is more or less done. Now I'm tackling the battle mode. I have been unable to find any resources about how a classic turn-based JRPG battle system is structured. All I find are discussions about damage formulas. I've tried googling, searching gamedev.net's message board, and crawling through C++-related questions here on Stack Exchange. I've also tried reading the source code of existing open-source RPGs, but without a guide of some sort it's like trying to find a needle in a haystack. I'm not looking for a set of rules like D&D or anything similar; I'm talking purely about code and object structure design. A battle system asks the player for input using menus. Next, the battle turn is executed as the heroes and the enemies carry out their actions. Can anyone point me in the right direction? Thanks in advance.
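
    Nothing in the question pins down one design, but the flow it describes (a menu-driven input phase followed by a turn-execution phase) maps naturally onto a small state machine plus the command pattern: each menu choice is captured as an action object, and the turn executor sorts and resolves the queue. A minimal, hypothetical C++ sketch; every name in it is invented, and a real system would add states for victory/defeat, targeting, animations and AI:

    ```cpp
    #include <algorithm>
    #include <functional>
    #include <string>
    #include <vector>

    // One combatant, hero or enemy.
    struct Combatant {
        std::string name;
        int hp;
        int speed;
    };

    // An action picked during the menu phase, resolved during execution.
    struct BattleAction {
        Combatant* actor;
        Combatant* target;
        std::function<void(Combatant&, Combatant&)> effect;
    };

    class BattleSystem {
    public:
        // Input phase: every hero picks a command, every enemy AI picks one.
        void queue(BattleAction a) { actions.push_back(std::move(a)); }

        // Execution phase: resolve all queued actions for this turn.
        void executeTurn() {
            // Classic JRPG ordering: the fastest combatant acts first.
            std::sort(actions.begin(), actions.end(),
                      [](const BattleAction& a, const BattleAction& b) {
                          return a.actor->speed > b.actor->speed;
                      });
            for (BattleAction& a : actions)
                if (a.actor->hp > 0 && a.target->hp > 0)  // skip the fallen
                    a.effect(*a.actor, *a.target);
            actions.clear();
        }

    private:
        std::vector<BattleAction> actions;
    };

    int main() {
        Combatant hero{"Hero", 100, 12};
        Combatant slime{"Slime", 30, 8};
        BattleSystem battle;
        // A plain attack, expressed as a self-contained action object.
        battle.queue({&hero, &slime,
                      [](Combatant&, Combatant& t) { t.hp -= 10; }});
        battle.executeTurn();  // slime.hp is now 20
    }
    ```

    The useful property of this shape is that menus, AI and scripted events all produce the same BattleAction objects, so the turn executor never needs to know where an action came from.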


  • Real world pitfalls of introducing F# into a large codebase and engineering team

    - by nganju
    I'm CTO of a software firm with a large existing codebase (all C#) and a sizable engineering team. I can see how certain parts of the code would be far easier to write in F#, resulting in faster development time, fewer bugs, easier parallel implementations, and so on: basically overall productivity gains for my team. However, I can also see several productivity pitfalls of introducing F#, namely:

    1) Everyone has to learn F#, and it's not as trivial as switching from, say, Java to C#. Team members who have not learned F# will be unable to work on F# parts of the codebase.

    2) The pool of hireable F# programmers, as of now (Dec 2010), is practically non-existent. Search various software engineer resume databases for "F#": fewer than 1% of resumes contain the keyword.

    3) Community support as of now (Dec 2010) is less available. You can google almost any problem in C# and find someone who has already dealt with it; not so with F#. Third-party tool support (NUnit, ReSharper, etc.) is also sketchy.

    I realize that this is a bit Catch-22: if people like me don't use F#, then the community and tools will never materialize. But I've got a company to run, and I can be cutting edge but not bleeding edge. Any other pitfalls I'm not considering? Or does anyone care to rebut the pitfalls I've mentioned? I think this is an important discussion and would love to hear your counter-arguments in this public forum, which may do a lot to increase F# adoption by industry. Thanks.


  • Is there a search engine that indexes the source code of a web page?

    - by Dexter
    I need to search the web for sites in our industry that use the same AdWords management company, to verify that said company is not violating our contract, as they have been accused of doing. They use a tracking code in the template of every page which has a certain domain in the URL, and I'm wondering if it's possible to "Google" the source code using some bot that crawls the code rather than the rendered content. For example, I bought an unlimited license for an image gallery, and I was asked to type the license number in a comment just before the script. I thought it was just so a human could look at the source and find out whether someone had paid, but it turned out that the vendor had a crawler looking for their source code and that comment. If it ran across the code on your site, it would look for the comment, and if it found one, it would check whether it was a valid existing license. If not, it would first notify you of your noncompliance, and then notify the owner of the script. Edit: I'm looking to index HTML and JavaScript only, not the server-side languages or Java.
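
    The marker scheme described above is straightforward to picture in code. A hypothetical C++ fragment showing only the matching step; the `<!-- license: ... -->` comment format is invented for illustration, and a real crawler would need the fetching, parsing and politeness machinery (robots.txt, rate limits) around it:

    ```cpp
    #include <iostream>
    #include <regex>
    #include <string>

    // Given the raw HTML of a crawled page, extract the vendor's license
    // marker, if present. Matching runs against the source code, not the
    // rendered text, which is exactly what an ordinary search engine
    // does not expose.
    std::string findLicenseComment(const std::string& html) {
        static const std::regex marker(
            R"(<!--\s*license:\s*([A-Za-z0-9-]+)\s*-->)");
        std::smatch m;
        if (std::regex_search(html, m, marker))
            return m[1];  // license number to validate against a database
        return "";        // page embeds the script without the comment
    }

    int main() {
        std::string page =
            "<html><!-- license: ABC-1234 -->"
            "<script src=\"gallery.js\"></script></html>";
        std::cout << findLicenseComment(page) << "\n";  // prints ABC-1234
    }
    ```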


  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study, and what to do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought (back when I was studying Computer Science) working life as a programmer was going to be like, and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:

    Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming it would turn out to be. Perhaps it's something I mentally glazed over, hoping I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance, bug-fixing, and support oriented.

    Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently from a team of students working on a large semester-long project. And I've worked in both the small agile hack shop and the medium-sized corporate enterprise. While I wouldn't say it has always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline I expected it to be.

    Has anyone else had similar experiences? In what ways did your expectations of our profession differ from the reality?


  • How can I find unused/unapplied CSS rules in a stylesheet?

    - by liori
    Hello, I've got a huge CSS file and an HTML file. I'd like to find out which rules are not used while displaying the HTML file. Are there tools for this? The CSS file has evolved over a few years and, from what I know, no one has ever removed anything from it: people just wrote new overriding rules again and again.

    EDIT: It was suggested to use Dust-Me Selectors or Chrome's Web Page Performance tool. But they both work at the level of selectors, not individual rules. I've got lots of cases where a rule inside a selector is always overridden, and this is what I mostly want to get rid of. For example:

        body { color: white; padding: 10em; }
        h1 { color: black; }
        p { color: black; }
        ...
        ul { color: black; }

    All the text in my HTML is inside some wrapper element, so it is never white. body's padding always works, so of course the whole body selector cannot be removed. And I'd like to get rid of such useless rules too.

    EDIT: And another case of a useless rule: when it duplicates an existing one without changing anything:

        a { margin-left: 5px; color: blue; }
        a:hover { margin-left: 5px; color: red; }

    I'd happily get rid of the second margin-left... again, it seems to me that those tools do not find such things. Thank you,


  • An Oracle Event for Your Facility & Equipment Maintenance Staff

    - by Mark Rosenberg
    The 7th Annual Oracle Maintenance Summit will occur February 4-6, 2013 at the Hyatt Regency San Francisco. This year, the Maintenance Summit will be one of the major pillars of the larger Oracle Value Chain Summit. What makes this event different from the other events hosted by Oracle and the PeopleSoft community's various user groups is that it is specifically meant to provide a venue for the facility and equipment maintenance community to talk about all things related to maintenance. Maintenance Planners, Maintenance Schedulers, Vice Presidents and Directors of Physical Plant, Operations Managers, Craft Supervisors, IT management, and IT analysts typically attend this event and find it to be a very valuable experience. The Maintenance pillar will provide the same atmosphere and opportunity to hear from PeopleSoft Maintenance Management customers, Oracle Product Strategy, and partners as in past years. For more information, you can access the registration website for the Value Chain Summit.

    For existing PeopleSoft Maintenance Management customers: if you are interested in participating in the PeopleSoft Maintenance Management Focus Group, in which Oracle discusses product roadmap topics with the community of customers who have licensed the PeopleSoft Maintenance Management application, please contact [email protected], [email protected], or [email protected]. The Focus Group will meet on February 7th, and attendance is by invitation only.

    We look forward to seeing you in San Francisco!

    P.S. The Early Bird registration fee is $195. Register before December 31 to take advantage of this introductory low price, as the registration fee will go up to $295 after that date.


  • How to create an extensible rope in Box2D?

    - by Thomas
    Let's say I'm trying to create a ninja lowering himself down a rope, or pulling himself back up, all while he might be swinging from side to side or being hit by objects. Basically like http://ninja.frozenfractal.com/ but with Box2D instead of hacky JavaScript. Ideally I would like to use a rope joint in Box2D that allows me to change the length after construction. The standard Box2D RopeJoint doesn't offer that functionality. I've considered a PulleyJoint, connecting the other end of the "pulley" to an invisible kinematic body that I can control to change the length, but a PulleyJoint is more like a rod than a rope: it constrains the maximum length, but unlike RopeJoint it constrains the minimum as well. Re-creating a RopeJoint every frame with a new length is rather inefficient, and I'm not even sure it would work properly in the simulation. I could create a "chain" of bodies connected by RevoluteJoints, but that is also less efficient, and less robust. I also wouldn't be able to change the length arbitrarily, only by adding and removing a whole number of links, and it's not obvious how I would connect the remainder without violating existing joints. This sounds like something that should be straightforward to do. Am I overlooking something?
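
    For what it's worth, the "re-create the joint" idea may be cheaper than it sounds if the joint is rebuilt only when the requested length actually changes (i.e., while the ninja is climbing), not every frame. A hypothetical sketch against the Box2D 2.x C++ API; note that newer Box2D releases fold rope behavior into a min/max-length b2DistanceJoint, which would allow changing the length without recreating anything:

    ```cpp
    #include <Box2D/Box2D.h>

    // Wraps a b2RopeJoint and rebuilds it only when the length changes.
    class VariableRope {
    public:
        VariableRope(b2World& world, b2Body* anchor, b2Body* ninja,
                     float length)
            : world(world), anchor(anchor), ninja(ninja) { rebuild(length); }

        // Call when the ninja climbs or descends; a no-op otherwise.
        void setLength(float length) {
            if (length == currentLength) return;
            world.DestroyJoint(joint);
            rebuild(length);
        }

    private:
        void rebuild(float length) {
            b2RopeJointDef def;
            def.bodyA = anchor;
            def.bodyB = ninja;
            def.localAnchorA.SetZero();  // rope tied to the anchor's center
            def.localAnchorB.SetZero();  // and to the ninja's center
            def.maxLength = length;
            joint = static_cast<b2RopeJoint*>(world.CreateJoint(&def));
            currentLength = length;
        }

        b2World& world;
        b2Body* anchor;
        b2Body* ninja;
        b2RopeJoint* joint = nullptr;
        float currentLength = -1.0f;
    };
    ```

    Destroying and creating a single joint is a small constant-time operation in Box2D, so rebuilding a few times per second while the length changes should not show up in profiling the way a per-frame rebuild would.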


  • A new mission statement for my school's algorithms class

    - by Eric Fode
    The teacher at Eastern Washington University who now teaches the algorithms course is new to Eastern, and as a result the course has changed drastically, mostly in the right direction. That being said, I feel that the class could use a more specific and industry-oriented direction (since industry is where most students will go, though suggestions for an academia-oriented class are also welcome). Having only worked in industry for 2 years, I would like the community's opinion (a wider, much more collectively experienced, and in the end plausibly more credible one) on the quality of the following as a statement of purpose for an algorithms class; and, if I am completely off target, your suggestion for the purpose of a required junior-level algorithms class that is standalone (so no other classes focusing specifically on algorithms are required). The statement is as follows:

    The purpose of the algorithms class is to do three things. Primarily, to teach how to learn about, do basic analysis of, and implement a given algorithm found outside of the class. Secondly, to teach the student how to model a problem in their mind so that they can find an existing algorithm or have a direction to start the development of a new algorithm. Third, to overview a variety of algorithms that exist, and to deeply understand and analyze one algorithm in each of the basic algorithmic design strategies: Divide and Conquer, Reduce and Conquer, Transform and Conquer, Greedy, Brute Force, Iterative Improvement and Dynamic Programming.

    The question, in short, is: do you agree with this statement of the purpose of an algorithms course, so that it would be useful in the real world? If not, what would you suggest?


  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from /commerce/product_info.php?products_id=9266 to /products/9266. This is nice, right? We don't need to know that it is (or was) a PHP page, and commerce, product_info, and products_id all tell us that we're looking at some products. The latter form seems like a great improvement. However, the change would have broken existing links. So, nicely, they stuck in 302 redirects. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue

        GET /commerce/product_info.php?products_id=9266 HTTP/1.1

    to which Sparkfun's servers reply

        HTTP/1.1 302 Found
        Location: http://www.sparkfun.com/products/9266

    This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266 I understand that a 302 redirect is intended to be a temporary redirect, while a 301 should be used for permanent changes per RFC 2616. That said, Wikipedia and common practice use 302 even for permanent redirects. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?
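
    The distinction the question asks about is visible in the status line alone, so a validator never has to guess. A small hypothetical checker using libcurl that reports 3xx responses as "moved" rather than "broken", and reserves "broken" for 4xx/5xx and transport failures:

    ```cpp
    #include <curl/curl.h>
    #include <cstdio>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL,
            "http://www.sparkfun.com/commerce/product_info.php?products_id=9266");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 0L); // inspect, don't follow
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);         // a HEAD request suffices

        long status = 0;
        if (curl_easy_perform(curl) == CURLE_OK)
            curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);

        if (status >= 300 && status < 400)
            std::printf("moved (%ld), not broken\n", status);
        else if (status == 0 || status >= 400)
            std::printf("broken (%ld)\n", status);
        else
            std::printf("ok (%ld)\n", status);

        curl_easy_cleanup(curl);
        curl_global_cleanup();
    }
    ```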


  • Incompatible group permissions in Linux - Is it a bug?

    - by Sachin
    I am on Ubuntu 11.04. I am creating another user and placing an existing user in the group of the other user, hoping to write in the home directory of the other user.

        # uname -a
        Linux vini 2.6.38-11-generic #50-Ubuntu SMP Mon Sep 12 21:18:14 UTC 2011 i686 athlon i386 GNU/Linux
        # whoami
        sachin
        # su root
        # useradd -m -U foo        // create user foo
        # usermod -a -G foo sachin // add user `sachin' to group `foo'
        # chmod 770 /home/foo/
        # exit
        # whoami
        sachin
        # cd /home/foo/
        bash: cd: /home/foo/: Permission denied
        # groups sachin
        sachin : sachin foo

    This is totally weird. Though user sachin is in group foo, and the group bits for /home/foo/ are set to rwx, sachin can't chdir to /home/foo/. I am not able to understand this. But if, at the exit step, I switch to the sachin user from root instead, this is what happens:

        # uname -a
        Linux vini 2.6.38-11-generic #50-Ubuntu SMP Mon Sep 12 21:18:14 UTC 2011 i686 athlon i386 GNU/Linux
        # whoami
        sachin
        # su root
        # useradd -m -U foo        // create user foo
        # usermod -a -G foo sachin // add user `sachin' to group `foo'
        # chmod 770 /home/foo/
        # su sachin
        # whoami
        sachin
        # cd /home/foo/
        # ls
        examples.desktop

    Now, whatever is happening here is totally incomprehensible. Does su sachin inherit some permissions from the root user at this step? Any explanations would be much appreciated.
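
    One detail worth checking, not stated in the question: `groups sachin` reads the group database, while the kernel's permission check uses the supplementary groups of the calling process, and those are fixed when the process is created (normally at login, or by su). A tiny C++/POSIX program to print what the current process actually holds; if foo shows up in `groups sachin` but not here, the running shell simply predates the usermod call, and `su sachin` succeeding would follow from su starting a fresh process with re-initialized groups:

    ```cpp
    #include <grp.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstdio>
    #include <vector>

    int main() {
        int n = getgroups(0, nullptr);      // how many supplementary groups?
        std::vector<gid_t> gids(n);
        getgroups(n, gids.data());
        for (gid_t g : gids) {
            group* gr = getgrgid(g);        // map numeric gid to a name
            std::printf("%d (%s)\n", (int)g, gr ? gr->gr_name : "?");
        }
    }
    ```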


  • Code structure for multiple applications with a common core

    - by Azrael Seraphin
    I want to create two applications that will have a lot of common functionality. Basically, one system is a more advanced version of the other. Let's call them Simple and Advanced. The Advanced system will add to, extend, alter and sometimes replace the functionality of the Simple system. For instance, the Advanced system will add new classes, add properties and methods to existing Simple classes, change the behavior of classes, etc.

    Initially I was thinking that the Advanced classes would simply inherit from the Simple classes, but I can see the functionality diverging quite significantly as development progresses, even while maintaining a core base functionality. For instance, the Simple system might have a Project class with a Sponsor property, whereas the Advanced system has a list of Project.Sponsors. It seems poor practice to inherit from a class and then hide, alter or throw away significant parts of its features. An alternative is just to run two separate code bases and copy the common code between them, but that seems inefficient, archaic and fraught with peril. Surely we have moved beyond the days of "copy-and-paste inheritance".

    Another way to structure it would be to use partial classes and have three projects: Core, which has the common functionality; Simple, which extends the Core partial classes for the simple system; and Advanced, which also extends the Core partial classes for the advanced system. Plus three test projects as well, one for each system. This seems like a cleaner approach.

    What would be the best way to structure the solution/projects/code to create two versions of a similar system? Let's say I later want to create a third system called Extreme, largely based on the Advanced system. Do I then create an AdvancedCore project which both Advanced and Extreme extend using partial classes? Is there a better way to do this? If it matters, this is likely to be a C#/MVC system, but I'd be happy to do this in any language/framework that is suitable.


  • Fixing the Windows 7 BOOTMGR

    - by Ashfame
    I made my Dell XPS 15z laptop dual-boot with Ubuntu last year, and something went wrong: the BOOTMGR of my Windows install got fried. I couldn't fix it at the time, and I kept using Ubuntu. I don't even remember whether I installed directly via a live USB or used Wubi, sorry. I installed 11.10 at that point, but right now I am on 12.10. Today I learned about the Boot Repair tool, so I was wondering whether with this tool I can figure out what exactly is wrong with my setup. This is my boot info: http://paste.ubuntu.com/1343575/ If I select the Win7 entry in GRUB2, I get the error "BOOTMGR is missing. Press Ctrl-Alt-Del." Now, the thing is, I have read numerous links on how this could be fixed, but I don't feel comfortable without knowing what I am doing. So unless I am sure what a certain tool would do, I would prefer fixing it by hand (manually editing files). So, reading from my boot info file, can anyone explain to me what exactly is messed up here and what could fix it? I certainly can't afford to have my Ubuntu install unbootable right now, but looking into this issue is bothering me too much. Help appreciated! I have a Win7 DVD & Ubuntu live USBs with me; I am just looking for a sure-shot way of fixing Win7 without any harm to my existing Ubuntu install.


  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh.

    The existing system:
    - A Microsoft Access project performs dashboard duties
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project
    - Third-party libraries referenced by Access directly interface with the control system (read: motors, conveyors, and counters)
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility

    The issue: this system started as a home-brewed inventory tracking scheme with a single internal sponsor, who is still in charge of the technical direction and is prescribing the desired deliverables called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?


  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is:
    - self-defeating
    - just a clone of an existing, better-established license
    - practical
    - any more "corporate-friendly" than the GPL
    - too vague/open-ended

    and finally, if there is a better license that achieves a similar effect. I wanted a license that would (in simple terms):
    - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
    - allow anyone to make modifications as long as I'm attributed
    - require that I get a notification that such a derived work exists
    - require that I have access to the source code and be given license to use the code
    - not oblige the author of the derivative work to release the source code to the general public
    - not oblige the author of the derivative work to license the derivative work under a specific license

    Here is the proposed license, which is just the simplified BSD with a couple of additional clauses (all of which are bolded in the original):

        Copyright (c) (year), (author) (email)
        All rights reserved.

        Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

        - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
        - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
        - The copyright holder(s) must be notified of any redistributions of source code.
        - The copyright holder(s) must be notified of any redistributions in binary form.
        - The copyright holder(s) must be granted access to the source code and/or the binary form of any redistribution upon the copyright holder's request.

        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


  • Is it correct to fix bugs without adding new features when releasing software for system testing?

    - by Pratik
    This question is for experienced testers or test leads. This is a scenario from a software project: say the dev team has completed the first iteration of 10 features and released it to system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team of course cannot sit idle for 5 days, so they start creating 10 new features for the next iteration. During this time the test team finds defects and raises some bugs. The bugs are prioritised, and some of them have to be fixed before the next iteration. The catch is that the test team will not accept a new release with any new features or changes to existing features until all those bugs are fixed. The test team's position: how can we guarantee a stable release for testing if we also introduce new features along with the bug fixes? They also cannot run regression tests of all their test cases each iteration. Apparently this is proper testing process according to ISTQB. This means the dev team has to create a branch of code solely for bug fixing and another branch where they continue development. There is more merging overhead, especially with refactoring and architectural changes. Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice in your projects?


  • ACT On Marketing Campaign “Middleware Consolidation and Innovation Program”

    - by JuergenKress
    You want marketing budget to run joint Oracle Fusion Middleware 12c events? Participate in the OFM ACTon Campaign. The opportunity for you as a partner is to:
    - Create larger deals by reselling software and systems, e.g. WebLogic on ODA, SOA on ODA, Exalogic for AppAdvantage
    - Create more service revenue at your existing customers, by consolidating and migrating application server platforms
    - Extend and innovate platforms, e.g. mobile integration, big data, or business process automation
    - Create service business at new customers; more than 120,000 customers use middleware today!

    The objective of the initiative is to run joint events for our middleware customers and to:
    - Generate resell middleware license revenue in the broad market
    - Generate service revenue for partners
    - Prepare partners to understand upgrade and upsell opportunities to Oracle Fusion Middleware 12c

    In the WebLogic Community Workspace (WebLogic Community membership required) you can learn details about the campaign: OFM ACTon event Brief & Middleware Consolidation and Innovation_Act-On Program_Salesplay & Campaign kit DRAFT. Interested and want to participate? Contact your local Value Added Distributor and they will work with you on a joint campaign plan!

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: marketing, ACton, Campaign, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress


  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them.

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features, including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure, and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines, including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these, which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally; no import/export steps are required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for web sites, including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment, as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow, including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use, allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis, which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services"; with this week's release we are now referring to it as "cloud services". We are also enabling a bunch of new features with them. One of the really cool new features is a distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. It can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol, allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application, regardless of whether the cached data is stored on it or another role. The big benefit of the co-located option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively, and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems, allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today; there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded network and storage hardware, SQL Reporting Services, new identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott


  • Why would accessing photos over a network be a problem for Digikam?

    - by Shedeki
    Digikam has always worked nicely for me. I recently set up a Synology DiskStation (DS212+) and moved all my pictures to it, keeping them in an encrypted folder. I mount that folder using CIFS, as some bug prevents eCryptfs and NFS from working together. This has made Digikam incredibly slow. Startup takes a very long time (several minutes for 41779 items, 123.8GB), but worse is how long it takes Digikam to write files. I like using Digikam's import feature to copy new images from my camera to the hard drive because it checks for duplicates as well as creating a clear folder structure according to the dates the images were taken. Since I moved to using the network drive, Digikam takes about 5 to 10 times as long to import photos as it did before. Saving modified or converted images takes equally long. What I am looking for is a way to help Digikam speed things up, or an alternative piece of software (I have never liked Digikam being so very much KDEish…). There are just so many features that only Digikam seems to combine, e.g.:
    - Batch processing
    - Respects existing folder structure
    - Does not mess up files for other applications
    - *.NEF support
    - Caches thumbnails in a clean way


  • Virtually the fastest way to try Solaris 11 (and Solaris 10 zones)

    - by dminer
    If you're looking to try out Solaris 11, there are the standard ISO and USB image downloads on the main page. Those are great if you're looking to install Solaris 11 on hardware, and we hope you will. But if you take the time to look down the page, you'll find a link off to the Oracle Solaris 11 Virtual Machine downloads. There are two downloads there:
    - A pre-built Solaris 10 zone
    - A pre-built Solaris 11 VM for use with VirtualBox

    If you're looking to try Solaris 11 on x86, the second one is what you want. Of course, this assumes you have VirtualBox already (and if you don't, now's the time to try it; it's a terrific free desktop virtualization product). Once you complete the 1.8 GB download, it's a simple matter of unzipping the archive and a few quick clicks in VirtualBox to get a Solaris 11 desktop booted. While it's booting, you'll get to run through the new system configuration tool (that'll be the subject of a future posting here) to configure networking, a user account, and so on.

    So what about that pre-built Solaris 10 zone download? It's a really simple way to get yourself acquainted with the Solaris 10 zones feature, which you may well find indispensable in transitioning an existing Solaris 10 infrastructure to Solaris 11. Once you've downloaded the file, it's a self-extracting executable that'll configure the zone for you; all you have to supply is an IP address for the zone. It's really quite slick!

    I expect we'll do a lot more pre-built VMs and zones going forward, as that's a big part of being a cloud OS; if there's one that would be really useful for you, let us know.


  • Making LISPs manageable

    - by Andrea
    I am trying to learn Clojure, which seems a good candidate for a successful LISP. I have no problem with the concepts, but now I would like to start actually doing something. Here comes my problem. As I mainly do web stuff, I have been looking into existing frameworks, database libraries, templating libraries and so on. Often these libraries are heavily based on macros. Now, I very much like the possibility of writing macros to get a simpler syntax than would otherwise be possible. But it definitely adds another layer of complexity. Let me take an example of a migration in Lobos from a blog post:

        (defmigration add-posts-table
          (up [] (create clogdb
                   (table :posts
                     (integer :id :primary-key)
                     (varchar :title 250)
                     (text :content)
                     (boolean :status (default false))
                     (timestamp :created (default (now)))
                     (timestamp :published)
                     (integer :author [:refer :authors :id] :not-null))))
          (down [] (drop (table :posts))))

    It is very readable indeed. But it is hard to recognize what the structure is. What does the function timestamp return? Or is it a macro? Having all this freedom to write my own syntax means that I have to learn other people's syntax for every library I want to use. How can I learn to use these components effectively? Am I supposed to learn each small DSL as a black box?


  • Can no longer boot with rEFIt and Grub on early 2006 MacBook Pro

    - by Don Quixote
    I don't know what happened to cause this. I have Snow Leopard, Ubuntu 11.04 Natty Narwhal and Windows XP SP3 on my early 2006 MacBook Pro. It is a Core Duo unit, NOT Core 2 Duo, so it is 32-bit only (Model Identifier MacBookPro1,1). I use rEFIt 0.14 for my boot menu. For some reason neither XP nor Ubuntu would boot anymore; I'd just get a black screen with a rapidly flashing underscore in the top-left corner. Having both those OSes fail to boot suggested a problem with the boot loader in my MBR. The rEFIt partition tool verified that my MBR partitions were still synced with my GPT partitions, so I rewrote my MBR partition table with fdisk while booted from Parted Magic:

        # fdisk /dev/sda
        (fdisk warns about the disk having a GPT. I press on anyway.)
        p (Print the existing partition table to make sure it's OK.)
        w (Write the old partition table back to disk. This also writes a new MBR boot loader.)

    After this XP would boot but Ubuntu would not, with the same symptom. Next I used update-grub while chrooted into Ubuntu from Parted Magic:

        # mount /dev/sda3 /mnt
        # mount --bind /dev /mnt/dev
        # mount --bind /sys /mnt/sys
        # mount --bind /proc /mnt/proc
        # chroot /mnt

    Chroot issues some warnings about not being able to identify some group IDs. I don't know why that happens, or whether it is a problem. At this point, while still booted off of Parted Magic's kernel, I am running from Natty's filesystem.

        # update-grub

    Update-grub detects each of my operating systems and then claims to complete successfully, but Ubuntu still won't boot. I asked this same question over at rEFIt's SourceForge support forum but there have been no replies yet. I also Googled quite a bit, and see many who have the same black-screen problem, but none of their situations seem quite like mine. Thanks for any help you can give me. -- Don Quixote


  • Problem connecting to ISP server using xl2tpd as client (Ubuntu Server 13.04)

    - by Deon Pretorius
    I have followed guides found on Google and Ubuntu support pages and can get an xl2tpd connection up, but only under the following conditions:

    1 - The ADSL modem must be configured and connected to the ISP, or
    2 - With the ADSL modem in bridge mode, I must have an existing PPPoE connection established.

    If neither of the above is active, xl2tpd won't trigger pppd and connect to the ISP, and thus the tunnel fails to connect to the ISP's L2TP server. Am I doing something wrong?

    /etc/ppp/options.l2tpd.axxess:

        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        refuse-chap
        require-pap
        noccp
        noauth
        idle 1800
        mtu 1200
        mru 1200
        defaultroute
        usepeerdns
        debug
        lock
        connect-delay 5000
        name (name used for ppp connection)

    /etc/ppp/pap-secrets:

        # *	password
        (name used for ppp connection as above) * (ppp password supplied by isp)

    /etc/xl2tpd/xl2tpd.conf:

        [global]
        ; Global parameters:
        auth file = /etc/xl2tpd/l2tp-secrets  ; * Where our challenge secrets are
        access control = yes                  ; * Refuse connections without IP match
        debug tunnel = yes

        [lac axxess]
        lns = 196.30.121.50                   ; * Who is our LNS?
        redial = yes                          ; * Redial if disconnected?
        redial timeout = 5                    ; * Wait n seconds between redials
        max redials = 5                       ; * Give up after n consecutive failures
        hidden bit = yes                      ; * Use hidden AVP's?
        length bit = yes                      ; * Use length bit in payload?
        require pap = yes                     ; * Require PAP auth. by peer
        require chap = no                     ; * Require CHAP auth. by peer
        refuse chap = yes                     ; * Refuse CHAP authentication
        require authentication = yes          ; * Require peer to authenticate
        name = BLA85003@axxess                ; * Report this as our hostname
        ppp debug = yes                       ; * Turn on PPP debugging
        pppoptfile = /etc/ppp/options.l2tpd.axxess ; * ppp options file for this lac

    /etc/xl2tpd/l2tp-secrets:

        # Secrets for authenticating l2tp tunnels
        # us        them        secret
        # *         marko       blah2
        # zeus      marko       blah
        # *         *           interop
        *           vzb_l2tp    (*** secret supplied by isp)
                    ^ isp server host name

    Any help will be greatly appreciated.


  • ArchBeat Link-o-Rama for 11/10/2011

    - by Bob Rhubart
    All day, all architecture. Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14. Free registration. Spend the day with your peers learning from Oracle experts in Cloud Computing, Engineered Systems, Oracle WebLogic, Oracle Coherence, Application-Driven Virtualization, and more. Registration is free, but seating is limited. Register now!

    - Data Integration - Bad data is really the monster | Bikram Sinha
      "Bad data can cause huge operational failure and cost millions of dollars in terms of time and resources to clean up and validate data across multiple participating systems," says Bikram Sinha.
    - Changing a navigation model on a page in WebCenter | Edwin Biemond
      Another illustrated how-to from Oracle ACE Edwin Biemond.
    - Why do I need an Authenticator when I have an Identity Asserter? | Chris Johnson
      Chris Johnson responds to a user question.
    - OOW: The Most Important Thing | Floyd Teter
      Oracle ACE Director Floyd Teter explains why he sees "the inclusion of Fusion Applications CRM and HCM in the Oracle Public Cloud" as the most important news to come out of Oracle OpenWorld 2011.
    - Oracle Releases Oracle Solaris 11 | Gokhan Atil
      Atil offers an overview of some of the "key points" of the new Solaris 11 release.
    - SOA Development Virtual Developer Day (On Demand)
      You won't get the hands-on experience available in the live event, but you will learn how a SOA approach can be implemented, whether starting afresh with new services or reusing existing services.
    - Webcast: Maximum Availability on Private Clouds - Nov 10 - 10am PT / 1pm ET
      Featuring Margaret Hamburger (Director, Product Marketing, Oracle) and Joe Meeks (Director, Product Management, Oracle).
    - Should Enterprise Architecture Teams Be More Focused on Innovation? | Richard Seroter
      Richard Seroter looks for answers among the opinions offered by Forrester analyst Brian Hopkins and Jude Umeh of Capgemini.


  • What is the best way to track / record the current programming project you work on? [duplicate]

    - by user2424160
    This question already has answers here:
    - Methodology for Documenting Existing Code Base (6 answers)
    - When do you start documenting the code? (13 answers)
    - Where should a programmer explain the extended logic behind the code? (5 answers)

    I have been struggling with this problem for a long time, and I want to know how it's done in real, large company projects. Suppose I have a project to build a website. I divide the project into subtasks and work through them. But suppose I have one task in hand, like exporting the page to PDF. I spend 3 days on it, come across various problems and many Stack Overflow questions, and in the end I solve it. Then, 4 months later, someone tells me there is an error in the code. By that point I have completely forgotten most (60%) of how I did it and why I did it that way. I document the code, but I can't write the whole story in the code. So I have to spend a lot of time digging through the code to find what the problem was and why I added a particular line. I want to know whether there is any way to log the steps taken in completing a project, so that I can see how I ended up with the code, what errors I got, what questions I asked on SO, and so on. How do people do this in real life? Which software should I use? I know that in our project management software, JIRA, we have tasks, but that does not cover the steps I took to solve those tasks. What is the best way to ensure that when I look back at my 2-year-old project, I know how I solved a particular task?


  • Is an app that does nothing but link to a web site functional enough to meet Apple's iOS guidelines?

    - by Pointy
    I don't hang out on Programmers enough to know whether this question is "ok", so my apologies if not. I tried to make the title obvious so at least it can be closed quickly :-) The question is simple. My employer wants "home screen presence" (or at least the possibility thereof) on iOS devices (also Android but I'm mostly interested in Apple at the moment). Our actual application will be a pure web-delivered mobile-friendly application, so what we want on the homescreen is basically something that just acts as a link to bring up Safari (or Chrome now I guess; not important). I'm presuming that that's more-or-less possible; if not then that would be interesting too. I know that the Apple guidelines are such that low-functionality apps are generally rejected out of hand. There are a lot of existing apps that seem (to me) less functional than a link to something useful, but I'm not Apple of course. Because this seems like a not-too-weird situation, I'm hoping that somebody knows it's either definitely OK (maybe because there are many such apps) or definitely not OK. Note that I know about things like PhoneGap and I don't want that, at least not at the moment.

