Search Results

Search found 6605 results on 265 pages for 'complex networks'.

Page 138 of 265

  • What is the value in hiding the details through abstractions? Isn't there value in transparency?

    - by user606723
    Background

    I am not a big fan of abstraction. I will admit that one can benefit from the adaptability, portability, and re-usability of interfaces, etc. There is real benefit there, and I don't wish to question that, so let's ignore it. There is the other major "benefit" of abstraction, which is to hide implementation logic and details from users of this abstraction. The argument is that you don't need to know the details, and that one should concentrate on their own logic at this point. It makes sense in theory. However, whenever I've been maintaining large enterprise applications, I have always needed to know more details. It becomes a huge hassle digging deeper and deeper into the abstraction at every turn just to find out exactly what something does; i.e. having to do "open declaration" about 12 times before finding the stored procedure used. This 'hide the details' mentality seems to just get in the way. I'm always wishing for more transparent interfaces and less abstraction. I can read high-level source code and know what it does, but I'll never know how it does it, when how it does it is what I really need to know. What's going on here? Has every system I've ever worked on just been badly designed (from this perspective at least)?

    My philosophy

    When I develop software, I try to follow a philosophy I feel is closely related to the Arch Linux philosophy: "Arch Linux retains the inherent complexities of a GNU/Linux system, while keeping them well organized and transparent. Arch Linux developers and users believe that trying to hide the complexities of a system actually results in an even more complex system, and is therefore to be avoided." And therefore, I never try to hide the complexity of my software behind abstraction layers. I try to abuse abstraction, not become a slave to it.

    Question at heart

    Is there real value in hiding the details? Aren't we sacrificing transparency? Isn't this transparency valuable?

    Read the article

  • SQL Azure Service Issues – 10.27.2012 (Restored Now)

    - by ToStringTheory
    Please note that if you have a Windows Azure website, or use SQL Azure, your site may currently be experiencing downtime.

    Notice

    I just called in regarding one of my public-facing internet sites, because the site was failing to load anything but its error page, I couldn't connect to the database to inspect application error logs, and the Windows Azure Management portal wouldn't load the SQL Azure extension. After speaking to the representative, he mentioned that they were also having some problems updating the Service Dashboard which shows service up/down time, and for now they are posting messages at http://account.windowsazure.com. Please note that this issue may only be affecting certain regions. Last, I may have misheard the representative, but he said that the outage was being categorized as a level 8, and if I heard correctly, I think he said that level 8 was the worst level. I can't say for sure on this though, because the phone connection to their support number was bad – large amounts of white noise. Good luck!

    Update

    It appears that this outage may also be affecting the following services: SQL Database, Service Bus, Datamarket, Windows Azure Marketplace, Shared Caching, Access Control 2.0, and SQL Reporting. The note on the account page says the South Central US region; however, I believe the representative I spoke to also mentioned North Central. As I said before though, the connection was bad.

    Update 2

    My site regained connectivity about an hour ago, and it appears that the service dashboard is back in operation with correct status and history. It does appear that I misheard on the phone regarding multiple regions, so chances are this only affected a percentage of the platform. If this WAS their worst level of problem, they really got it fixed and back up pretty fast. All in all, I understand that it is inherent for a complex system such as Azure to have ups and downs, but at the end of the day I am still happy to support Azure to its fullest!

    Read the article

  • Extreme Optimization Numerical Libraries for .NET – Part 1 of n

    - by JoshReuben
    While many of my colleagues are fascinated with constructing the ultimate ViewModel or ServiceBus, I feel that this kind of plumbing code is re-invented far too many times – at some point in the near future, it will be out-of-the-box standard infrastructure. How many times have you been to a customer site and built a different variation of the same kind of code framework? How many times can you abstract Prism or reliable and discoverable WCF communication? As the bar is raised for what's bundled with the framework, and more tasks become declarative, automated, and configurable, information systems will expose a higher level of abstraction, forcing software engineers to focus on more advanced computer science and algorithmic tasks. I've spent the better part of the past decade building skills in .NET and expanding my mathematical horizons by working through the Schaum's guides. In this series I am going to examine how these skill sets come together in the implementation provided by Extreme Optimization. Download the trial version here: http://www.extremeoptimization.com/downloads.aspx

    Overview

    The library implements a set of algorithms for: linear algebra, complex numbers, numerical integration and differentiation, solving equations, optimization, random numbers, regression, ANOVA, statistical distributions, and hypothesis tests. EONumLib combines three libraries in one, organized in a consistent namespace hierarchy:

      - Mathematics Library - Extreme.Mathematics namespace
      - Vector and Matrix Library - Extreme.Mathematics.LinearAlgebra namespace
      - Statistics Library - Extreme.Statistics namespace

    System Requirements: .NET Framework 4.0

    Mathematics Library

    The classes are organized into the following namespace hierarchy:

      - Extreme.Mathematics - common data types, exception types, and delegates.
      - Extreme.Mathematics.Calculus - numerical integration and differentiation of functions.
      - Extreme.Mathematics.Curves - points, lines and curves, including polynomials and Chebyshev approximations; curve fitting and interpolation.
      - Extreme.Mathematics.Generic - generic arithmetic and linear algebra.
      - Extreme.Mathematics.EquationSolvers - root finding algorithms.
      - Extreme.Mathematics.LinearAlgebra - vectors, matrices, matrix decompositions, and solvers for simultaneous linear equations and least squares.
      - Extreme.Mathematics.Optimization - multi-dimensional function optimization plus linear programming.
      - Extreme.Mathematics.SignalProcessing - one- and two-dimensional discrete Fourier transforms.
      - Extreme.Mathematics.SpecialFunctions
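    To give a flavor of the linear algebra portion, here is a minimal sketch of solving a small linear system. The factory and member names (Matrix.Create, Vector.Create, Solve) are assumptions inferred from the namespace summary above, not verified against the library's documentation:

        // Hypothetical sketch -- type and member names are assumptions,
        // not verified against the Extreme Optimization documentation.
        using Extreme.Mathematics.LinearAlgebra;

        class LinearSystemDemo
        {
            static void Main()
            {
                // Solve A * x = b for a small 2x2 system.
                var a = Matrix.Create(2, 2, new double[] { 4, 1, 1, 3 }, MatrixElementOrder.RowMajor);
                var b = Vector.Create(1.0, 2.0);
                var x = a.Solve(b);
                System.Console.WriteLine(x);
            }
        }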

    Read the article

  • Best practice with branching source code and application lifecycle

    - by Toni Frankola
    We are a small ISV shop and we usually ship a new version of our products every month. We use Subversion as our code repository and Visual Studio 2010 as our IDE. I am aware a lot of people are advocating Mercurial and other distributed source control systems, but at this point I do not see how we could benefit from these, though I might be wrong. Our main problem is how to keep branches and the main trunk in sync. Here is how we do things today:

      1. Release a new version (automatically creating a tag in Subversion)
      2. Continue working on the main trunk that will be released next month

    The cycle repeats every month and works perfectly. The problem arises when an urgent service release needs to be shipped. We cannot release it from the main trunk (2), as it is under heavy development and not stable enough to be released urgently. In such a case we do the following:

      1. Create a branch from the tag we created in step (1)
      2. Fix the bug
      3. Test and release
      4. Push the change back to the main trunk (if applicable)

    Our biggest problem is merging these two (branch with main). In most cases we cannot rely on automatic merging because, for example:

      - a lot of changes have been made to the main trunk
      - merging complex files (like Visual Studio XML files etc.) does not work very well
      - another developer / team made changes you do not understand, and you cannot just merge them

    So what do you think is the best practice to keep these two different versions (branch and main) in sync? What do you do?
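    For reference, the branch-and-cherry-pick part of such a workflow can be expressed directly in Subversion commands; a minimal sketch, with illustrative repository paths and revision numbers (run from inside a working copy so the ^/ repository-root shorthand resolves):

        # Branch for the urgent service release from last month's tag
        svn copy ^/tags/2010.04 ^/branches/2010.04-hotfix -m "Branch for urgent service release"

        # ...fix, test, and release from the branch, then cherry-pick the
        # fix back to trunk by its revision number:
        svn merge -c 4711 ^/branches/2010.04-hotfix trunk-working-copy
        svn commit trunk-working-copy -m "Merge r4711 service fix from 2010.04-hotfix"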

    Read the article

  • Bad style programming, am I expecting too much?

    - by Luca
    I have come to realize that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could call myself a "perfectionist" (though often I'm not), and I thought about refactoring an application (really a portion of one) which needs a new (complex) feature. But today I truly realized that it's not possible to refactor those application modules in a reasonable time (say, 24 to 26 hours out of the 160 hours available for the task). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut-and-paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions, buffer overflows because of sprintf, bad indentation (!) and spacing, non-existent const modifier usage... and I'm sure I could continue. I would say that every source line was written quite randomly when needed, without keeping any design in mind (at least, the obvious one). (Am I in hell?) The problem is that the application was developed by a colleague of mine, and I felt very frustrated. So, I decided to raise the "situation" with my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant design." ... Am I asking too much in expecting my colleague to write readable code that is maintainable and documented? Am I expecting too much in not wanting to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

  • Oracle Tutor: Learn Tutor in the comfort of your own home or office

    - by emily.chorba(at)oracle.com
    The primary challenge for companies faced with documenting policies and procedures is to realize that they can do this documentation in-house, with existing resources, using Oracle Tutor. Procedure documentation is a critical success component when supporting corporate governance or other regulatory compliance initiatives, and when implementing or upgrading to a new business application. Over 1000 Oracle Tutor customers worldwide have used Tutor to create, distribute, and maintain their business procedures. This is easily accomplished because of Tutor's:

      - Ease of use by those who have to write procedures (Microsoft Word based authoring)
      - Ease of company-wide implementation (complex document management activities are centralized)
      - Ease of use by workers who have to follow the procedures (play script format)
      - Ease of access by remote workers (web-enabled)

    Oracle University is offering live virtual Tutor classes! The class lasts four days, starting on Tuesday and finishing on Friday. This course is an introduction to the Oracle Tutor suite of products. It focuses on the policy and procedure writing feature set of the Tutor applications. Participants will learn about writing procedures and maintaining these particular process document types, all using the Tutor method. The next three classes are scheduled for:

      - April 19 - 22
      - May 31 - June 3
      - July 5 - 8

    You will learn to:

      - Write procedures
      - Create procedure flowcharts
      - Write support documents
      - Create impact analysis reports
      - Create role-based employee manuals
      - Deploy online employee manuals on an intranet

    Enjoy learning Tutor in your local environment. Start the sign-up process from this link.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Implement service layer in MVC

    - by Dan H
    We have a defined service layer hosted in WCF. We are now building a website that will need to use the service's functionality. The website is being written in ASP.NET MVC 4, and I'm trying to decide how to reference the WCF services from the MVC app. It's a large, complex website, and it will be changing on a weekly basis. My first reaction is to abstract out the service references (about 7 services on this one WCF host) and create a service-reference facade library with which the website interacts. But I don't know exactly how to use the service facade in MVC. I'm starting to think the models will be responsible for it: when the controller gets a model, that model should call the service (if needed) and return what the controller asked for. I'm trying to avoid having the MVC app know the details of the service references. So, I could have a model factory that creates whatever models the controllers need, using the service facade to do so. Is this a good plan, or am I off track?
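    One common shape for such a facade is a thin interface that wraps the generated WCF proxies and returns site models, so no service types leak into the MVC app; a minimal sketch with hypothetical names (CustomerServiceClient stands in for a generated proxy):

        // Hypothetical names throughout -- illustrative, not the poster's actual services.
        public class CustomerModel { public int Id { get; set; } public string Name { get; set; } }

        public interface ICustomerServiceFacade
        {
            CustomerModel GetCustomer(int id);
        }

        public class CustomerServiceFacade : ICustomerServiceFacade
        {
            public CustomerModel GetCustomer(int id)
            {
                // Wrap the WCF proxy so no service contract types escape this library.
                using (var client = new CustomerServiceClient())
                {
                    var dto = client.GetCustomer(id);
                    return new CustomerModel { Id = dto.Id, Name = dto.Name };
                }
            }
        }

        public class CustomerController : Controller
        {
            private readonly ICustomerServiceFacade _facade;

            // Injected via the MVC dependency resolver (or supplied by a model factory).
            public CustomerController(ICustomerServiceFacade facade) { _facade = facade; }

            public ActionResult Details(int id)
            {
                return View(_facade.GetCustomer(id));
            }
        }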

    Read the article

  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking whether each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function template in place, and I could use that, though it only checks whether sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle, without checking every single pixel on-screen.
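    For the rectangle-rectangle case at least, the overlap region is itself an axis-aligned rectangle and can be computed directly rather than per pixel; SDL2 even ships this as SDL_IntersectRect. A minimal sketch with illustrative coordinates:

        #include <SDL2/SDL.h>
        #include <stdio.h>

        int main(void)
        {
            /* The overlap of two axis-aligned rects is the rect spanned by the
               larger of the left/top edges and the smaller of the right/bottom. */
            SDL_Rect a = { 0, 0, 100, 100 };
            SDL_Rect b = { 60, 40, 100, 100 };
            SDL_Rect overlap;

            if (SDL_IntersectRect(&a, &b, &overlap))
                printf("overlap: x=%d y=%d w=%d h=%d\n",
                       overlap.x, overlap.y, overlap.w, overlap.h);
            return 0;
        }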

    Read the article

  • chromium-browser uses 99.99% of disk I/O

    - by lars
    My favorite browser, Chromium, is testing my patience. For some reason it sometimes uses 99.99% of I/O (reading 2-3 MB/s). Other processes (updatedb.mlocate, [kswapd0], clementine, compiz) show the same behavior. However, this problem always starts and ends with Chromium. To illustrate the impact on my system: when my disk starts to spin like crazy and the LED burns continuously, the system is so slow that it takes about two to five minutes to switch to tty6, log in, and execute "killall chromiumbrowser && killall chromium". This is way faster than starting a new terminal in X; just starting a terminal seems too heavy for compiz under these circumstances. Waiting until it's over takes more than 30 minutes, if it ends at all. The exact circumstances are difficult to replicate. Several tabs have to be open, usually 8 or more. It seems the chance increases when more complex sites like Gmail or plugins like Flash are running. Opening several new tabs at omgubunt.co.uk has the best chance of replicating this issue. I have no idea where to start looking for a solution. Any help would be greatly appreciated.

    Ubuntu 12.10 | 2GB | 2x 1.66GHz Intel | 32bit | IBM Thinkpad R60e
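    One place to start looking: iotop can confirm which process is actually generating the disk traffic while an episode is happening, for example:

        # Show only processes currently doing I/O (-o), with accumulated totals (-a)
        sudo iotop -o -a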

    Read the article

  • Finding the Right Solution to Source and Manage Your Contractors

    - by mark.rosenberg(at)oracle.com
    Many of our PeopleSoft Enterprise applications customers operate in service-based industries, and all of our customers have at least some internal service units, such as IT, marketing, and facilities. Employing the services of contractors, often referred to as "contingent labor," to deliver internal services, external services, or both is common practice. As we've transitioned from an industrial age to a knowledge age, talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services. Talent comes from many sources, and the use of contingent workers (contractors, consultants, temporary and part-time staff) has risen significantly in the past decade and is expected to reach 40 percent in the next decade. Managing the total pool of talent in a seamless, integrated fashion not only saves organizations money and increases efficiency, but creates a better place for workers of all kinds to work. Although the term "contingent labor" is frequently used to describe both contractors and employees who have flexible schedules and relationships with an organization, the remainder of this discussion focuses on contractors; the term "contingent labor" is used interchangeably with "contractor."

    Recognizing the importance of contingent labor, our PeopleSoft customers often ask our team, "Which Oracle vendor management system (VMS) applications should I evaluate for managing contractors?" In response, I thought it would be useful to describe and compare the three most common Oracle-based options available to our customers. They are:

      1. The enterprise licensed software model, in which you implement and utilize the PeopleSoft Services Procurement (sPro) application and potentially other PeopleSoft applications;
      2. The software-as-a-service model, in which you gain access to a derivative of PeopleSoft sPro from an Oracle Business Process Outsourcing partner; and
      3. The managed service provider (MSP) model, in which staffing industry professionals utilize either your enterprise licensed software or the software-as-a-service application to administer your contingent labor program.

    At this point, you may be asking yourself, "Why three options?" The answer is that since there is no "one size fits all" in terms of talent, there is also no "one size fits all" for effectively sourcing and managing contingent workers. Various factors influence how an organization thinks about and relates to its contractors, and each of the three Oracle-based options addresses an organization's needs and preferences differently. For the purposes of this discussion, I will describe the options with respect to (A) pricing and software provisioning models; (B) control and flexibility; (C) level of engagement with contractors; and (D) approach to sourcing, employment law, and financial settlement.

    Option 1: Enterprise Licensed Software

    In this model, you purchase from Oracle the license and support for the applications you need. Typically, you license PeopleSoft sPro as your VMS tool for sourcing, monitoring, and paying your contract labor. In conjunction with sPro, you can also utilize PeopleSoft Human Capital Management (HCM) applications (if you do not already) to configure more advanced business processes for recruiting, training, and tracking your contractors.

    Many customers choose this enterprise licensed software model because of the functionality and natural integration of the PeopleSoft applications, and because the cost of the PeopleSoft software is explicit. There is no fee per transaction to source each contractor under this model. Our customers that employ contractors to augment their permanent staff on billable client engagements often find this model appealing because there are no fees to affect their profit margins. With this model, you decide whether to have your own IT organization run the software or have the software hosted and managed by either Oracle or another application services provider. Your organization, perhaps with the assistance of consultants, configures, deploys, and operates the software for managing your contingent workforce. This model offers you the highest level of control and flexibility, since your organization can configure the contractor process flow exactly to your business and security requirements and can extend the functionality with PeopleTools. This option has proven very valuable and applicable to our customers engaged in government contracting, because their contingent labor management practices are subject to complex standards and regulations. Customers find a great deal of value in the application functionality and configurability the enterprise licensed software offers for managing contingent labor. Some examples of that functionality are:

      - The ability to create a tiered network of preferred suppliers, including competencies, pricing agreements, and elaborate candidate management capabilities.
      - Configurable alerts and online collaboration for bid, resource requisition, timesheet, and deliverable entry, routing, and approval, for both resource- and deliverable-based services.
      - The ability to manage contractors with the same PeopleSoft HCM and Projects applications that are used to manage the permanent workforce.

    Because it allows you to utilize much of the same PeopleSoft HCM and Projects application functionality for contractors that you use for permanent employees, the enterprise licensed software model supports the deepest level of engagement with the contingent workforce. For example, you can: fill job openings with contingent labor; guide contingent workers through essential safety and compliance training with PeopleSoft Enterprise Learning Management; and source contingent workers directly to project-based assignments in PeopleSoft Resource Management and PeopleSoft Program Management. This option enables contingent workers to collaborate closely with your permanent staff on complex, knowledge-based efforts: R&D projects, billable client contracts, architecture and engineering projects spanning multiple years, and so on. With the enterprise licensed software model, your organization maintains responsibility for the sourcing, onboarding (including adherence to employment laws), and financial settlement processes. This means your organization maintains on staff, or hires, the expertise in these domains to utilize the software and interact with suppliers and contractors.

    Option 2: Software as a Service (SaaS)

    The effort involved in setting up and operating VMS software to handle a contingent workforce leads many organizations to seek a system that can be activated and configured within a few days and for which they can pay based on usage. Oracle's Business Process Outsourcing partner, Provade, Inc., provides exactly this option to our customers.

    Provade offers its vendor management software as a service over the Internet and usually charges your organization a fee that is a percentage of your total contingent labor spending processed through the Provade software. (Percentage of spend is the predominant fee model, although not the only one.) In addition to lower implementation costs, the effort of configuring and maintaining the software is largely upon Provade, not your organization. This can be very appealing to IT organizations that are thinly stretched supporting other important information technology initiatives. Built upon PeopleSoft sPro, the Provade solution is tailored for simple and quick deployment and administration. Provade has added capabilities to clone users rapidly and has simplified business documents, like work orders and change orders, to facilitate enterprise-wide, self-service adoption with little to no training. Provade also leverages Oracle Business Intelligence Enterprise Edition (OBIEE) to provide integrated spend analytics and dashboards. Although pure customization is more limited than with the enterprise licensed software model, Provade offers a very effective option for organizations that are regularly on-boarding and off-boarding high volumes of contingent staff hired to perform discrete support tasks (for example, order fulfillment during the holiday season, hourly clerical work, desktop technology repairs, and so on) or project tasks. The software is very configurable and at the same time very intuitive to even the most computer-phobic users. The level of contingent worker engagement your organization can achieve with the Provade option is generally the same as with the enterprise licensed software model, since Provade can automatically establish contingent labor resources in your PeopleSoft applications. Provade has pre-built integrations to Oracle's PeopleSoft and the Oracle E-Business Suite procurement, projects, payables, and HCM applications, so that you can evaluate, train, assign, and track contingent workers like your permanent employees. Similar to the enterprise licensed software model, your organization is responsible for the contingent worker sourcing, administration, and financial settlement processes. This means your organization needs to maintain the staff expertise in these domains.

    Option 3: Managed Services Provider (MSP)

    Whether you are using the enterprise licensed model or the SaaS model, you may want to engage the services of sourcing, employment, payroll, and financial settlement professionals to administer your contingent workforce program. Firms that offer this expertise are often referred to as "MSPs," and they are typically staffing companies that also offer permanent and temporary hiring services. (In fact, many of the major MSPs are Oracle applications customers themselves, and they utilize the PeopleSoft Solution for the Staffing Industry to run their own business operations.) Usually, MSPs place their staff on-site at your facilities, and they can utilize either your enterprise licensed PeopleSoft sPro application or the Provade VMS SaaS software to administer the network of suppliers providing contingent workers. When you utilize an MSP, there is a separate fee for the MSP's service that is typically funded by the participating suppliers of the contingent labor. Also in this model, the suppliers of the contingent labor (not the MSP) usually pay the contingent labor force.

    With an MSP, you are intentionally turning over business process control for the advantages associated with having someone else manage the processes. The software option you choose will, to a certain extent, affect your process flexibility; however, the MSPs are often able to adapt their processes to the unique demands of your business. When you engage an MSP, you will want to give some thought to the level of engagement and "partnering" you need with your contingent workforce. Because the MSP acts as an intermediary, it can be very valuable in handling high-volume, routine contracting for which there is a relatively low need for "partnering" with the contingent workforce. However, if your organization (or part of your organization) engages contingent workers for high-profile client projects that require diplomacy, intensive amounts of interaction, and personal trust, introducing an MSP into the process may prove less effective than handling the process with your own staff. In fact, in many organizations, it is common to enlist an MSP to handle contractors working on internal projects and to have permanent employees handle the contractor relationships that affect the portion of the services portfolio focused on customer-facing, billable projects. One of the key advantages of enlisting an MSP is that you do not have to maintain the expertise required for orchestrating the sourcing, hiring, and paying of contingent workers. These are the domain of the MSPs. If your own staff members are not prepared to manage the essential "overhead" processes associated with contingent labor, working with an MSP can make solid business sense. Proper administration of a contingent workforce can make the difference between project success and failure, operating profit and loss, and legal compliance and fines.

    Concluding Thoughts

    There is little doubt that thoughtfully and purposefully constructing a service delivery strategy that leverages the strengths of contingent workers can lead to better projects, deliverables, and business results. What requires a bit more thinking is determining the platform (or platforms) that will enable each part of your organization to best deliver on its mission.

    Read the article

  • How far to go with Domain Driven Design?

    - by synti
    I've read a little about domain-driven design and the usage of a rich domain model, as described by Martin Fowler, and I've decided to put it into practice in a personal project, instead of using transaction scripts. Everything went fine until UI implementation started. The thing is, some views will use rich components that are backed by unusual models, and thus I must transform the domain model into what is used by those components. And that transformation is especially "complex" in the view-to-domain direction, up to the point that some business logic is involved. Which brings me to the question: where should I do these adaptations? So far I've reached the following conclusions:

      - Doing it in the presentation layer is good because, well, if that layer imposes restrictions on its model, then it should be the one to handle them. But it's bad because there'll be some business leakage.
      - If I do it in the service objects (controllers, actions, whatever), then it'd be good because there won't be any change to the domain API just because of the presentation layer, but it's bad because then I'd have transaction scripts, which is not the intended design.
      - Finally, if I do it in the domain model, there'd be no leakage of business logic at all. But in the future I could expect an explosion of the API into a series of methods designed just to handle that view-model <-> domain-model adaptation.

    I hope I could make myself clear on this.

    Read the article

  • N64oid Brings N64 Emulation to Android Devices

    - by ETC
    Craving some Ocarina of Time adventures, Super Mario 64 antics, or some Star Fox 64 flying on your Android device? N64oid brings retro emulation of Nintendo's popular N64 console to Android devices. N64oid is an N64 console emulator for Android devices. You'll need a copy of the $5.99 emulator, ROMs (from the usual sources, unless you've got a ROM ripping setup in your basement and a stack of old cartridges), and a suitably speedy Android device. Older Android devices will find the playback choppy and subpar, but newer and speedier devices like the Nexus One and Samsung Galaxy should have no problem handling the emulator. Like all emulators, N64oid is a work in progress, and emulating an entire closed-system console on a totally different set of hardware is never a perfect 1:1 emulation, but if you're a die-hard fan of classic N64 titles (check out this list of top-ranked titles to inspire some nostalgia), N64oid is worth the price of a burger for sure. N64oid [Android Market via Download Squad]

    Read the article

  • How to Clean Up and Fix Your Music Library with the MusicBrainz Database

    - by Erez Zukerman
    Over the years, some of us accumulate lots and lots of music files. Since these come from a variety of sources, they're not always as neat as they could be. If your music library is in a bit of a jumble, with tags missing, oddly named files, and incomplete albums, read on to see how easy it is to make it neat once and for all. MusicBrainz is an online database that uses audio "fingerprints" to identify music tracks even when they're incorrectly labelled. We'll be using this database through a free client called Picard, available for Windows, Mac OS X, and Linux. So, first thing, head on over to Picard's download page and get the installer. If you use Linux, you can install Picard using your package manager. Once you finish going through the installer, run Picard. Your firewall might pop up an alert telling you Picard is trying to access the Internet; you should agree to let Picard through. You will now see the main Picard interface. Click View > File browser (or press Ctrl+B).

    Read the article

  • Logging library for (C++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the upcoming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a library for logging (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

      - performance
      - ease of use (allowing streaming or formatting or something like that)
      - reliability (no leaks or crashes!)
      - cross-platform support (at least Windows, Mac OS X, Linux/Ubuntu)

    Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it does not have good performance; an update designed for high performance is coming but isn't released yet. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't manage multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding while complex to debug, so it would be good to know logging libraries that, in our specific case, have clear advantages.

    Read the article

  • Server side C# MVC with AngularJS

    - by Ryan Langton
    I like using .NET MVC and have used it quite a bit in the past. I have also built SPAs using AngularJS with no page loads other than the initial one. I think I want to use a blend of the two in many cases: set up the application initially using .NET MVC, with routing handled server side, and then set up each page as a mini-SPA. These pages can be quite complex, with a lot of client-side functionality. My confusion comes in how to handle the model-view binding using both .NET MVC and the AngularJS scope. Do I avoid the use of Razor HTML helpers? The HTML helpers create the markup behind the scenes, so it could get messy trying to add AngularJS attributes via the helpers. So how would I add ng-model, for example, to @Html.TextBoxFor(x => x.Name)? Also, ng-repeat wouldn't work because I'm not loading my model into the scope, right? Would I still want to use a C# foreach loop? Where do I draw the lines of separation? Has anyone gotten .NET MVC and AngularJS to play nicely together, and what are your architectural suggestions for doing so?
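    On the ng-model question specifically: the HTML helpers accept an htmlAttributes object, and Razor converts underscores in those attribute names into dashes, so Angular directives can be attached without giving up the helpers. A minimal sketch (the "name" scope property is illustrative):

        @* Renders roughly: <input type="text" id="Name" name="Name" ng-model="name" /> *@
        @Html.TextBoxFor(x => x.Name, new { ng_model = "name" })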

    Read the article

  • Implement Budget Allocation in DAX for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    Comparing sales and budget, or costs and budget, is a very common operation. However, it is often the case that the budget table and the table containing the data to compare it with have different granularities. There are two ways to handle that: you can limit the comparison to the granularity that is common to the two tables, or you can allocate the budget where it's not defined. For example, if you have a budget defined by quarter and category, you might want to allocate it by month and product. In this way, you can do the comparison as if you had a more granular definition of the budget, without actually having to do the manual job of allocating the data (usually in an Excel worksheet!). If you want to do budget allocation in DAX, you can use the Budget Patterns we published on DAX Patterns. If you come from an MDX/OLAP background, at first you might find it hard to work without the attribute hierarchies that help you propagate budget values to lower hierarchical levels. However, I think that once you get used to DAX, you will find the behavior very predictable and easy to "debug", even for more complex allocation formulas. You just have to be careful in writing the DAX formula, but the pattern we wrote should help you design the right data model, without creating physical relationships to the budget table! This pattern is also based on the Handling Different Granularities scenario I discussed a couple of weeks ago.
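    To give a flavor of the allocation idea (a simplified sketch, not the full pattern from DAX Patterns; the table, column, and [Budget Amount] measure names are illustrative, and [Budget Amount] is assumed to return the quarter-level budget), a quarter's budget can be spread to a month in proportion to that month's share of the quarter's days:

        Allocated Budget :=
        [Budget Amount]
            * DIVIDE (
                COUNTROWS ( 'Date' ),  // days visible in the current filter (e.g. the month)
                CALCULATE ( COUNTROWS ( 'Date' ), ALL ( 'Date'[Month] ) )  // days in the whole quarter
            )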

    Read the article

  • Which algorithms/data structures should I "recognize" and know by name?

    - by Earlz
    I'd like to consider myself a fairly experienced programmer; I've been programming for over 5 years now. My weak point, though, is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms/data structures that I should recognize and know by name? Note, I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them; I just want to be able to recognize when an algorithm/data structure would be a good solution to a problem. I'm asking more for a list of algorithms/data structures that I should "recognize". For instance, I know the solution to a problem like this: You manage a set of lockers labeled 0-999. People come to you to rent a locker and then come back to return the locker key. How would you build a piece of software to manage knowing which lockers are free and which are in use? The solution would be a queue or stack. What I'm looking for are things like "in what situation should a B-tree be used?" or "what search algorithm should be used here?", and maybe a quick introduction to how the more complex (but commonly used) data structures/algorithms work. I tried looking at Wikipedia's list of data structures and algorithms, but I think that's a bit overkill. So I'm looking more for the essential things I should recognize.
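    For the locker example, the bookkeeping really is just a pool of free locker numbers; a minimal illustrative sketch (a Queue<int> would work equally well, the choice only changes which free locker you hand out next):

        using System.Collections.Generic;

        class LockerDesk
        {
            // All currently free locker numbers.
            private readonly Stack<int> free = new Stack<int>();

            public LockerDesk()
            {
                for (int i = 999; i >= 0; i--) free.Push(i);
            }

            // Rent the next available locker in O(1).
            public int Rent() => free.Pop();

            // Return a key, making that locker available again in O(1).
            public void Return(int locker) => free.Push(locker);
        }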

    Read the article

  • A Technical Perspective On Rapid Planning

    - by Robert Story
    Upcoming Webcast

    Title: A Technical Perspective On Rapid Planning
    Date: April 14, 2010
    Time: 11:00 am EDT, 9:00 am MDT, 8:00 am PDT, 16:00 GMT
    Product Family: Value Chain Planning

    Summary

    Oracle's Strategic Network Optimization (SNO) product is a powerful supply chain design and tactical planning tool. This one-hour session is recommended for functional users who want to gain a better understanding of how Oracle's SNO solution can help you solve complex supply chain issues, including supply chain design, risk management, logistics planning, sustainability planning, and a whole lot in between! Find out how SNO can be used to solve many different types of real-world business issues. Topics will include:

      - Risk/Disaster Management
      - Carbon Emissions Management
      - Global Sourcing
      - Labor/Workforce Planning
      - Product Mix Optimization

    A short, live demonstration (only if applicable) and a question and answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Character progression through leveling, skills or items?

    - by Anton
    I'm working on a design for an RPG game, and I'm having some doubts about the skill and level system. I'm going for a more casual, explorative gaming experience, and so I thought about lowering the game's complexity by simplifying character progression. But I'm having trouble deciding between the following:

      1. Progression through leveling: no complex skill progression; leveling increases base stats.
      2. Progression through skills: no leveling or base stat changes; skills progress through usage.
      3. Progression through items: more focus on stat-changing items; items confer skills; no leveling.

    However, I'm uncertain what the effects on gameplay might be in the end. So, my question is this: what would be the effects of choosing one of the above alternatives over the others? (Particularly with regard to the style and feel of the gameplay.) My take on it is that the first sacrifices more frequent rewards and customization in favor of simpler gameplay; the second sacrifices explicit customization and player control in favor of more frequent rewards and somewhat simpler gameplay; while the third sacrifices inventory simplicity and a player metric in favor of player control, customization, and progression simplicity.

    Addendum: I'm not really limiting myself to the above three; they are just the ones I liked most and am primarily interested in.

    Read the article

  • Strategy for restoring state via URL in web apps

    - by JW01
    This is a question about modern web apps, where a single page is loaded and all subsequent navigation is done by XHR calls and modifying the DOM. We can use libraries that manipulate the hash string, which let us navigate by URL and support the back/forward buttons. But to use those libraries, we need to be able to move the UI from any one state to any other. Is there a good strategy for moving between UI states that also allows them to be restored from scratch when you load a new URL? In a complex app, you might have a lot of different states. You don't want to reload the entire UI each time you change states. But you also don't want to require separate methods for moving from every state to every other state. Typically we need to:

      - Restore a state from scratch, when you enter a new URL or hit Reload.
      - Move from one state to another, when you use the Back/Forward buttons.
      - Move from one state to another, when you perform an action within your app (like clicking a link).
      - Move to certain states that shouldn't be added to the history, like ones that appear after form submissions.
      - Move to some states that are built on the previous state, like a drill-down list.

    When you perform actions within your app, there's the additional question of which comes first: do you change the URL, listen for the URL change, and change your state in response to it? Or do you change your state, then change the URL, but not do anything in response? Does anyone have some experience to share on this topic?
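    The first of those two orderings treats the URL as the single source of truth, which also makes restore-from-scratch fall out for free; a minimal sketch in plain JavaScript (parseHash and renderState are hypothetical app functions):

        // All navigation just sets location.hash; this one listener performs
        // every state transition, whether triggered by a link click or the
        // Back/Forward buttons.
        window.addEventListener("hashchange", function () {
            renderState(parseHash(location.hash));
        });
        // Run once on load to restore the state encoded in the initial URL.
        renderState(parseHash(location.hash));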

    Read the article

  • Can the csv format be defined by a regex?

    - by Spencer Rathbun
    A colleague and I have recently argued over whether a pure regex is capable of fully encapsulating the csv format, such that it is capable of parsing all files with any given escape char, quote char, and separator char. The regex need not be capable of changing these chars after creation, but it must not fail on any other edge case. I have argued that this is impossible for just a tokenizer. The only regex that might be able to do this is a very complex PCRE style that moves beyond merely tokenizing. I am looking for something along the lines of: "... the csv format is a context-free grammar and as such, it is impossible to parse with regex alone ..." Or am I wrong? Is it possible to parse csv with just a POSIX regex? For example, if both the escape char and the quote char are ", then these two lines are valid csv:

      """this is a test.""",""
      "and he said,""What will be, will be."", to which I replied, ""Surely not!""","moving on to the next field here..."

    Read the article

  • Is this kind of design (a class for operations on an object) correct?

    - by Mithir
    In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code would only work for the scenario for which it was first designed, while now there are more scenarios (like requests from an API or from other devices). So I had to redesign. I found myself moving all the DB code to objects which act as business-to-DB objects, and I've put all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which will hold all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which will take the InformationObject as a parameter and act on it. The OperatorObject should activate different objects, validate or check for existence or any scenario in which the business logic is compromised, and then perform the operation according to the information in the InformationObject. So my question is: is this kind of implementation correct? P.S. This Operator only works on a single business-wise operation.
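    A minimal sketch of the structure being described, with hypothetical names for a concrete operation (the post itself names only InformationObject and OperatorObject):

        // "InformationObject": holds everything the operation needs.
        class TransferInformation
        {
            public int FromAccountId { get; set; }
            public int ToAccountId { get; set; }
            public decimal Amount { get; set; }
        }

        // "OperatorObject": validates, then acts through the business-to-DB objects.
        class TransferOperator
        {
            public void Execute(TransferInformation info)
            {
                Validate(info);               // all business rules live here
                // ...call the business-to-DB objects to persist the result
            }

            private void Validate(TransferInformation info)
            {
                if (info.Amount <= 0)
                    throw new System.ArgumentException("Amount must be positive.");
            }
        }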

    Read the article

  • How should I implement a command processing application?

    - by Nini Michaels
    I want to make a simple, proof-of-concept application (a REPL) that takes a number and then processes commands on that number. Example: I start with 1. Then I write "add 2", and it gives me 3. Then I write "multiply 7", and it gives me 21. Then I want to know if it is prime, so I write "is prime" (on the current number, 21), and it gives me false. "is odd" would give me true. And so on. Now, for a simple application with few commands, even a simple switch would do for processing the commands. But if I want extensibility, how would I need to implement the functionality? Do I use the command pattern? Do I build a simple parser/interpreter for the language? What if I want more complex commands, like "multiply 5 until >200"? What would be an easy way to extend it (add new commands) without recompiling? Edit: to clarify a few things, my end goal is not to make something similar to WolframAlpha, but rather a list (of numbers) processor. But I want to start slowly at first (with single numbers). I have in mind something similar to the way one would use Haskell to process lists, but a very simple version. I'm wondering if something like the command pattern (or an equivalent) would suffice, or if I would have to make a new mini-language and a parser for it to achieve my goals.
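    For the simple verb-plus-argument commands, a command-pattern style sketch in C# could look like the following (illustrative only; conditionals like "multiply 5 until >200" would push toward a real parser/interpreter, and "without recompiling" would mean loading command entries from plugin assemblies into the same dictionary):

        using System;
        using System.Collections.Generic;

        // Each verb maps to a function from (current value, argument) to a new
        // value; new commands are added by registering another entry, without
        // touching the loop below. Predicates like "is prime" would get their
        // own argument-free registry in the same style.
        class Repl
        {
            static readonly Dictionary<string, Func<double, double, double>> Commands =
                new Dictionary<string, Func<double, double, double>>
                {
                    ["add"]      = (x, arg) => x + arg,
                    ["multiply"] = (x, arg) => x * arg,
                };

            static void Main()
            {
                double current = 1;
                string line;
                while ((line = Console.ReadLine()) != null)
                {
                    var parts = line.Split(' ');   // e.g. "add 2" -> ["add", "2"]
                    current = Commands[parts[0]](current, double.Parse(parts[1]));
                    Console.WriteLine(current);
                }
            }
        }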

    Read the article

  • Is it better to hard code data or find an algorithm?

    - by OghmaOsiris
    I've been working on a board game that has a hex grid as the board (the upper right grid in the image below). Since the board will never change, and the spaces on the board will always be linked to the same other spaces around them, should I just hard code every space with the values that I need? Or should I use various algorithms to calculate links and traversals? To be more specific, my board game is a 4-player game where each player has a 5x5x5x5x5x5 hex grid, that is, a hexagon with five cells on each side (again, the upper right grid in the image above). The object is to get from the bottom of the grid to the top, with various obstacles in the way, and each player is able to attack the others from the edge of their grid onto other players' grids, based on a range multiplier. Since the player's grid will never change, and the distance of any arbitrary space from the edge of the grid will always be the same, should I just hard code this number into each of the spaces, or should I still use a breadth-first search algorithm when players are attacking? The only con I can think of for hard coding everything is that I'd have to code 9 + 2(5+6+7+8) = 61 individual cells. Is there anything else that I'm missing that should make me consider using more complex algorithms?
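    If the grid really is static, one middle ground is to keep the breadth-first search but run it once at startup and cache the results, instead of hand-coding all 61 values; a sketch of the multi-source BFS pass (cell IDs and the adjacency representation are illustrative):

        using System.Collections.Generic;

        static class HexDistances
        {
            // Multi-source BFS: distance of every cell from the nearest edge cell.
            // 'neighbors' is the hex adjacency list; 'edgeCells' seeds the search.
            public static Dictionary<int, int> FromEdge(
                Dictionary<int, List<int>> neighbors, IEnumerable<int> edgeCells)
            {
                var dist = new Dictionary<int, int>();
                var queue = new Queue<int>();
                foreach (var cell in edgeCells) { dist[cell] = 0; queue.Enqueue(cell); }

                while (queue.Count > 0)
                {
                    int cell = queue.Dequeue();
                    foreach (int next in neighbors[cell])
                    {
                        if (dist.ContainsKey(next)) continue;
                        dist[next] = dist[cell] + 1;   // one step farther from the edge
                        queue.Enqueue(next);
                    }
                }
                return dist;   // compute once at startup, then just look values up
            }
        }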

    Read the article

  • Visual WebGui helps Dawsons put its Windows Forms HR system on the web

    - by Webgui
    Dawsons needed to upgrade their existing Windows Forms LAN human resources system to allow better, more flexible access to an ever-increasing volume of data, as labor hire clients are demanding easy access to copies of tickets, licenses, and so on. The company has some 30,000 applicants on file, but access to this data has been complex and limited with the current system. Therefore the IT department was asked to find possible solutions that would allow creating a web-based application while utilizing as much of the existing Windows Forms code as possible. Visual WebGui was found to be highly regarded in many framework comparisons, so the team decided to give Visual WebGui a try. It didn't take long for them to recognize that the Visual WebGui controls appear and react over the web the same as desktop controls. This, and the fact that most of the code was ported directly, which saved Dawsons hundreds of development hours, is what makes Visual WebGui so unique and productive. "My first impression of Visual WebGui was perhaps disbelief. Not being a seasoned web programmer, I initially found it hard to accept so much functionality from a web-based application. Also, the speed is exceptional," said John Sainsbury, Financial Controller of the Dawsons Group, who added, "Since working with Visual WebGui, I have showcased parts of the application to our major clients, some of whom use SAP portals, and they are amazed." The result was so satisfying that the company is now looking to produce a mobile version for accessing the labor pool on the go. "By removing the barriers of the local network, Visual WebGui has changed how we can do business," said Lloyd Everist, General Manager, Dawsons Group. Read more about this story.

    Read the article
