Search Results

Search found 6605 results on 265 pages for 'complex networks'.


  • Designing configuration for subobjects

    - by Stefano Borini
    I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have the configuration stored in a configuration object MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the configuration of AlgoB, so technically I could have (and in practice I do have) contained MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig). The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel. I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print. MainConfig also needs to know how much to print. So the available solutions are:

    1. I pass the whole MainConfig to AlgoA and AlgoB. This way AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained.
    2. I copy MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the same printLevel information repeated three times.
    3. I create a third configuration class PrintingConfig. I keep an instance variable MainConfig.printingConfig, and then pass AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig (see the sketch below).

    Have you ever run into this situation? How did you solve it? Which one is stylistically clearest to a new reader of the code?
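
    A minimal C# sketch of option 3, with made-up member names (the question does not pin down a language or the exact shapes): the shared settings live in their own small class, and each algorithm receives only its own config plus the shared part.

        // Shared settings used by Main and by every subalgorithm.
        public class PrintingConfig
        {
            public int PrintLevel { get; set; }
        }

        public class AlgoAConfig { /* AlgoA-specific parameters */ }
        public class AlgoBConfig { /* AlgoB-specific parameters */ }

        public class MainConfig
        {
            public PrintingConfig Printing = new PrintingConfig();
            public AlgoAConfig AlgoAConfig = new AlgoAConfig();
            public AlgoBConfig AlgoBConfig = new AlgoBConfig();
        }

        public class AlgoA
        {
            private readonly AlgoAConfig _config;
            private readonly PrintingConfig _printing;

            // AlgoA sees its own parameters plus the shared printing
            // settings, but nothing of AlgoB's configuration.
            public AlgoA(AlgoAConfig config, PrintingConfig printing)
            {
                _config = config;
                _printing = printing;
            }
        }

    Wiring it up then reads as new AlgoA(mainConfig.AlgoAConfig, mainConfig.Printing): the printLevel exists exactly once, and each algorithm still depends only on what it actually uses.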

    Read the article

  • How-To Backup, Swap, and Update Your Wii Game Saves

    - by Jason Fitzpatrick
    Whether you want to backup your game saves because you've worked so hard on them or you want to import game saves precisely so you don't have to work so hard, we've got you covered. Image adapted from icon set by GasClown. There are a multitude of reasons you might want to export and import game saves from your Wii including: saving the progress on your favorite games before sending in your Wii for service, copying the progress to a friend's or your secondary Wii, and importing saved games from the web or your friend's Wii so that you don't have to bust your ass to unlock all the specialty items yourself. (Here's looking at you, Mario Kart and House of the Dead: Overkill.)

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep the JSPs and servlets, and now we are in doubt about how to include pages in one another and how to set up the jsp:useBean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the beans. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (because reality is more complex): head, header, top, navigation, actionspanel, main, header, actionspanel, foot, footer. Moreover, each JSP (also the header and footer) uses dynamic data. For example, the title and actionspanel can change on each page reload, or have links and labels that depend on the processing by the preceding servlet. I know that jsp include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts. Now the core questions. Should I use one big bean for each page, so that each bean also holds the data for header and footer beside its core data, and each subsequently included JSP uses the same bean directive? For example:

        DirectoryJSP <- DirectoryBean
        CompareJSP   <- CompareBean

    Or should I use one bean per JSP, so that each bean only holds the data for one JSP and its own purpose? For example:

        DirectoryJSP <- DirectoryBean
            HeaderJSP <- HeaderBean
            FooterJSP <- FooterBean
        CompareJSP   <- CompareBean
            HeaderJSP <- HeaderBean
            FooterJSP <- FooterBean

    In the second case: should the subordinate beans be members of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request?

    Read the article

  • How does a segment based rendering engine work?

    - by Calmarius
    As far as I know, Descent was one of the first games that featured a fully 3D environment, and it used a segment based rendering engine. Its levels are built from cubic segments (these cubes may be deformed as long as they remain convex and their sides remain roughly flat). The cubes are connected by their sides. Connected sides are traversable (doors or grids may be placed on them), while unconnected sides are non-traversable walls, so the game is played inside this complex. Descent was software rendered, and it had to be very fast to be playable on the 10-100MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, yet these levels are still rendered reasonably fast, so I think they tried to minimize the number of cubes rendered somehow. How do you choose which cubes to render for a given location? As far as I know they used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
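
    The usual description of this kind of engine is indeed portal rendering. Here is a guess at the visibility pass as a C# sketch (not Descent's actual source): start in the segment containing the camera and recursively visit neighbours through connected sides, shrinking the screen-space "window" at every portal. Projection is stubbed out as a stored rectangle per side.

        using System;
        using System.Collections.Generic;

        struct Rect
        {
            public float Left, Top, Right, Bottom;
            public Rect(float l, float t, float r, float b)
            { Left = l; Top = t; Right = r; Bottom = b; }

            public bool IsEmpty { get { return Left >= Right || Top >= Bottom; } }

            // You can only see through both windows at once.
            public Rect Intersect(Rect o)
            {
                return new Rect(Math.Max(Left, o.Left), Math.Max(Top, o.Top),
                                Math.Min(Right, o.Right), Math.Min(Bottom, o.Bottom));
            }
        }

        class Side
        {
            public Segment Neighbour;   // null = solid wall, not traversable
            public Rect ScreenBounds;   // projected bounds of this side; a real
                                        // engine recomputes this every frame
        }

        class Segment
        {
            public Side[] Sides = new Side[6];   // a (deformed) cube has 6 sides
        }

        static class PortalVisibility
        {
            public static List<Segment> Collect(Segment cameraSegment, Rect screen)
            {
                var visible = new List<Segment>();
                Visit(cameraSegment, screen, visible, new HashSet<Segment>());
                return visible;
            }

            static void Visit(Segment seg, Rect window, List<Segment> visible,
                              HashSet<Segment> seen)
            {
                if (!seen.Add(seg)) return;   // simplification: visit each once
                visible.Add(seg);
                foreach (var side in seg.Sides)
                {
                    if (side == null || side.Neighbour == null) continue;
                    Rect clipped = window.Intersect(side.ScreenBounds);
                    if (!clipped.IsEmpty)   // the portal is visible through the window
                        Visit(side.Neighbour, clipped, visible, seen);
                }
            }
        }

    Because the window only ever shrinks at each portal, distant branches of the level die out quickly, which would explain how levels with thousands of segments stayed playable on 10-100MHz CPUs.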

    Read the article

  • Best practices for caching search queries

    - by David Esteves
    I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems to be more complex than most caching scenarios. An example: I have a table of Locations and an API to retrieve locations via search, for an autocomplete. /api/location/Londo and the query would be something like SELECT * FROM Locations WHERE Name LIKE 'Londo%'. These locations change very infrequently, so I would like to cache them to prevent trips to the database for no real reason and to improve the response time. Looking at caching options, I am using the Windows Azure AppFabric system; the problem is that it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL DB which could possibly run as a cache for something like this to improve performance? Or should I just cache the entire table/collection in a single key, with a specific data structure which could assist with the searching, and then do the search upon retrieval of the data? (A sketch of this last option is below.)
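
    A C# sketch of that last option: cache the whole (small, rarely changing) name list under a single key, keep it sorted, and do the prefix match in memory after retrieval. The cache interface here is a stand-in; with AppFabric the Get/Put calls would go against its DataCache client.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public interface ICache
        {
            object Get(string key);
            void Put(string key, object value);
        }

        public class LocationSearch
        {
            private const string CacheKey = "locations:all";
            private readonly ICache _cache;
            private readonly Func<List<string>> _loadFromDb;  // hits the database

            public LocationSearch(ICache cache, Func<List<string>> loadFromDb)
            {
                _cache = cache;
                _loadFromDb = loadFromDb;
            }

            public IEnumerable<string> Autocomplete(string prefix, int max = 10)
            {
                // One round-trip to the cache; the DB is touched only on a miss.
                var names = (List<string>)_cache.Get(CacheKey);
                if (names == null)
                {
                    names = _loadFromDb()
                        .OrderBy(n => n, StringComparer.OrdinalIgnoreCase)
                        .ToList();
                    _cache.Put(CacheKey, names);
                }

                // The list is sorted, so all matches are contiguous: binary-search
                // the first candidate, then walk forward while the prefix holds.
                int lo = names.BinarySearch(prefix, StringComparer.OrdinalIgnoreCase);
                if (lo < 0) lo = ~lo;   // insertion point = first string >= prefix
                return names.Skip(lo)
                            .TakeWhile(n => n.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
                            .Take(max);
            }
        }

    Invalidation stays trivial with this layout: when a location changes, drop the single key and let the next request rebuild it.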

    Read the article

  • The How-To Geek Video Guide to Using Windows 7 Speech Recognition

    - by YatriTrivedi
    Ever get the desire to control your computer, Star Trek-style? With Windows 7's Speech Recognition, it's easier than you might think. Microsoft has been working on its voice command steadily over the years. XP introduced it, Vista smoothed it, and 7 has it polished. It's strangely not advertised as a feature, even though other voice command and speech recognition programs are hundreds of dollars. It may not be as perfect as some of them, but there's definitely something amazing about vocally telling your computer to do things and it actually working.

    Read the article

  • Drawing particles as a smooth blob

    - by Nömmik
    I'm new to game/graphics development and I'm playing around with particles (in 2D). I want to draw particles close to each other as a blob, just as liquid/water. I do not want to draw big overlapping circles, as the blob won't be smooth (and will be too big). I don't really know physics, but I assume what I want is something that looks similar to surface tension. I haven't been able to find anything on Stack Exchange or on Google (maybe I don't know the correct keywords?). So far I have found two possible solutions, but I am unable to find any concrete information about algorithms. One of them is to calculate the concave hull of the particles I consider to be one blob. I can find the blob by creating an equivalence class (on the relation "close to each other"). Strangely enough, I haven't been able to find any algorithm explaining how to calculate the concave hull. Many posts (including some on Stack Exchange) link to libraries or commercial products that do this (I need libraries that work in C#), but never any algorithm. Also, this solution might have a problem with a circle of particles, which would not detect the empty space in the middle. While researching concave hulls I stumbled upon something called alpha shapes, which seem to be exactly what I want to do. However, just as with the concave hull, I haven't found any source explaining how they actually work. I have found some presentation materials but not enough to go on. It's like a big secret everyone knows except me :-/ After calculating the concave hull or alpha shape I want to make it a Bézier curve, to make it smooth and nice. Although I do find my approach a bit too complex; maybe I am trying to solve this the wrong way? If you can either suggest any other solution to my problem, or explain the pieces I am missing, I would be very happy and grateful :-) Thanks.
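
    One more option worth naming, since it sidesteps hull computation entirely: a metaball-style implicit field. Each particle contributes a radial falloff, the fields of nearby particles sum, and the blob is the region above a threshold, so touching particles fuse smoothly and a ring of particles naturally keeps its hole. A hedged C# sketch (all constants are tunable, and the contour-extraction step is left out):

        using System;

        public struct Particle { public float X, Y; }

        public static class BlobField
        {
            // Inverse-square style falloff; R controls the particle "size".
            const float R = 10f;
            const float Threshold = 1f;

            // Summed field of all particles at one point.
            public static float FieldAt(float x, float y, Particle[] particles)
            {
                float sum = 0f;
                foreach (var p in particles)
                {
                    float dx = x - p.X, dy = y - p.Y;
                    float d2 = dx * dx + dy * dy + 1e-6f;  // avoid divide-by-zero
                    sum += (R * R) / d2;
                }
                return sum;
            }

            // Rasterise the blob mask on a grid; feed this (or the raw field
            // values) to marching squares to extract the smooth outline.
            public static bool[,] Rasterise(Particle[] particles, int w, int h)
            {
                var inside = new bool[w, h];
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        inside[x, y] = FieldAt(x, y, particles) >= Threshold;
                return inside;
            }
        }

    The contour points that marching squares produces would also give you exactly the polyline you wanted to smooth with a Bézier pass.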

    Read the article

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it, whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums, I often find a widespread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists, while programming is more about getting things done. What is normally implied here is that programming is something very practical, and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc., may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good text books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools for getting things done. So my question is: why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • Selecting a JAX-RS implementation for a new project

    - by Fernando Correia
    I'm starting a new Java project which will require a RESTful API. It will be a SaaS business application serving mobile clients. I have developed one project with Java EE 6, but I'm not very familiar with the ecosystem, since most of my experience is on the Microsoft platform. Which would be a sensible choice of JAX-RS implementation for a new project such as described? Judging by Wikipedia's list, the main contenders seem to be Jersey, Apache CXF, RESTeasy and Restlet, but the Comparison of JAX-RS Implementations cited on Wikipedia is from 2008. My first impressions from their respective homepages are that: CXF aims to be a very comprehensive solution (it reminds me of WCF in the Microsoft space), which makes me think it can be more complex to understand, set up and debug than what I need; Jersey is the reference implementation and might be a good choice, but it's legacy from Sun and I'm not sure how Oracle is treating it (the announcements page doesn't work and the last commit notice is from 4 months ago); RESTeasy is from JBoss and probably a solid option, though I'm not sure about the learning curve; Restlet seems to be popular but has a lot of history, and I'm not sure how up-to-date it is in the Java EE 6 world or whether it carries a heavy J2EE mindset (like lots of XML configuration). What would be the merits of each of these alternatives? What about learning curve? Feature support? Tooling (e.g. NetBeans or Eclipse wizards)? What about ease of debugging and also deployment? Is any of these projects more up-to-date than the others? How stable are they?

    Read the article

  • The Other "C" in CRM

    - by Brian Dayton
    By Brian Dayton on April 5, 2010 7:04 PM
    Folks who know me know that I rarely, if ever, talk politics. And I never talk politicians. Having grown up in a household with one parent leaning left and the other leaning to the right, it was the best way to keep the peace. This isn't about politics. It's about "constituents" and the need to improve the services and service levels for people at the city, county, state/province, etc. level, all the way up to national governments. As a citizen and taxpayer it's also important to me that these services be provided at a reasonable cost. If there's a better and more efficient way to do something, then it's my hope that a public sector organization takes advantage of technology the same way private sector companies do. Social services organizations have a complex job. They provide the services that people need, from healthcare and children's assistance to helping people find jobs. But many of these organizations are still managing these processes manually, or with outdated, home-grown applications that could have been written up to 30 years ago. A lot has changed in technology. On the (this is as political as I'm going to get) political front, stakeholders like you and me are expecting greater transparency on where and how funds are spent. I'll admit that most of the time, when I think about CRM systems, I think about my experience as a customer of my bank, utilities company or cable operator. But now that I'm older and have children and a house, I find myself interacting more and more with agencies and services organizations. My experiences are sometimes good and sometimes not so good. Along those lines, last week's announcement of Siebel CRM 8.2 for Public Sector caught my eye. You may not work in the public sector, but you are a constituent of some (actually a lot of) public sector organizations. I don't know which CRM systems my city and county utilize, but I'm going to start paying closer attention.

    Read the article

  • Are there good resources for leading documentation for an existing software product having none?

    - by Ben Rose
    Hello. I'm a software developer at a technology company. I have been tasked with leading the documentation effort for the product I work on, both internal to developers as well as spilling over into facilitating the business side of requirements documentation. This internal product has been around for at least 6 years. One challenge is that this software application has no form of documentation other than some small, outdated pieces here and there. There are comments in the code, but they are technical and do not convey any over-arching behavior (even on the technical side). As a consequence of having little to no documentation, this product is often unnecessarily complex under the covers, which adds to the challenge. We are very limited in the time we will be given to work on documentation. Another thing about me: I've displayed some ability in writing/communication around the office, but I'm not coming from any sort of documentation or formal writing background (beyond my academic career). Please share your advice or recommend resources, book/website/forum/whatever, for helping me come up with a plan with milestones, best practices, task delegation, templates, buy-in, etc. I'm hoping for a resource that targets, or at least gives special mention to, introducing good documentation on existing projects that previously had none. I would be very grateful for your responses. Ben

    Read the article

  • Unity3d: calculate the result of a transform without modifying transform object itself

    - by Heisenbug
    I'm in the following situation: I need to move an object in some way, basically rotating it around its parent's local position, or translating it in its parent's local space (I know how to do this). The amount of rotation and translation is known at runtime (it depends on several factors: the speed of the object, environment factors, etc.). The problem is the following: I can perform this transformation only if the resulting position of the transformed object fits some criteria. An example could be this: the distance between the position before and after the transformation must be less than a given threshold. (Actually the conditions could be several, and more complex.) The problem is that if I use the Transform.Rotate and Transform.Translate methods of my GameObject, I will lose the original Transform values. I think I can't copy the original Transform using Instantiate, for performance reasons. How can I perform such a task? I think I have more or less two possibilities:

    First: don't modify the GameObject's position through Transform. Calculate what the position would be after the transform. If that position is legal, modify the transform through the Translate and Rotate methods. (A sketch of this option follows below.)

    Second: store the original transform somehow, transform the object using Translate and Rotate, and if the transformed position is illegal, restore the original one.
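
    A sketch of the first option in Unity C#, using plain quaternion/vector math to compute the candidate result before touching the Transform (the distance test stands in for your real legality criteria, and the field names are illustrative):

        using UnityEngine;

        public class ConstrainedMover : MonoBehaviour
        {
            public float maxStep = 1.5f;   // legality criterion: max allowed jump

            public void TryRotateAroundParent(Vector3 axis, float angleDegrees)
            {
                Vector3 pivot = transform.parent.position;
                Quaternion delta = Quaternion.AngleAxis(angleDegrees, axis);

                // The same math RotateAround would apply, but on local values only:
                Vector3 candidatePos = pivot + delta * (transform.position - pivot);
                Quaternion candidateRot = delta * transform.rotation;

                // Apply only if the result is legal; otherwise the Transform is
                // untouched, so nothing has to be stored or restored.
                if (Vector3.Distance(transform.position, candidatePos) <= maxStep)
                {
                    transform.position = candidatePos;
                    transform.rotation = candidateRot;
                }
            }
        }

    Nothing needs to be instantiated or copied here: the candidate is just two value-type variables, which also makes the check cheap to run every frame.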

    Read the article

  • The problems with Avoiding Smurf Naming classes with namespaces

    - by Daniel Koverman
    I pulled the term smurf naming from here (number 21). To save anyone not familiar the trouble, smurf naming is the act of prefixing a bunch of related classes, variables, etc. with a common prefix, so you end up with "a SmurfAccountView passes a SmurfAccountDTO to the SmurfAccountController", etc. The solution I've generally heard for this is to make a smurf namespace and drop the smurf prefixes. This has generally served me well, but I'm running into two problems.

    I'm working with a library with a Configuration class. It could have been called WartmongerConfiguration, but it's in the Wartmonger namespace, so it's just called Configuration. I likewise have a Configuration class which could be called SmurfConfiguration, but it is in the Smurf namespace, so that would be redundant. There are places in my code where Smurf.Configuration appears alongside Wartmonger.Configuration, and typing out fully qualified names is clunky and makes the code less readable. It would be nicer to deal with a SmurfConfiguration and (if it were my code and not a library) WartmongerConfiguration.

    I have a class called Service in my Smurf namespace which could have been called SmurfService. Service is a facade on top of a complex Smurf library which runs Smurf jobs. SmurfService seems like a better name, because Service without the Smurf prefix is so incredibly generic. I can accept that SmurfService was already a generic, useless name and taking away Smurf merely made this more apparent. But it could have been named Runner, Launcher, etc., and it would still "feel better" to me as SmurfLauncher, because I don't know what a Launcher does, but I know what a SmurfLauncher does. You could argue that what a Smurf.Launcher does should be just as apparent as a Smurf.SmurfLauncher, but I could see Smurf.Launcher being some kind of class related to setup rather than a class that launches smurfs.

    If there is an open and shut way to deal with either of these, that would be great. If not, what are some common practices to mitigate their annoyance?
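
    For the first problem, if this is C# (several other languages have similar import aliases), a using-alias directive lets each file give the colliding Configuration types short, readable names without renaming either class or reverting to smurf prefixes everywhere:

        using SmurfConfig = Smurf.Configuration;
        using WartmongerConfig = Wartmonger.Configuration;

        namespace Smurf
        {
            public class Configuration { }
        }

        namespace Wartmonger
        {
            public class Configuration { }
        }

        public class Example
        {
            // Both types appear side by side with unambiguous names; the fully
            // qualified forms are written exactly once, at the top of the file.
            private SmurfConfig smurf = new SmurfConfig();
            private WartmongerConfig wartmonger = new WartmongerConfig();
        }

    The alias is purely file-local, so it only costs two lines in the files where the two Configurations actually meet.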

    Read the article

  • When does a Project Manager start in a project?

    - by johndoucette
    From a colleague of mine… "As a project manager, when do you typically like to get initially involved in the project? Is it better for the PM to be rolled on during the project kick-off, the first week, or is it better to roll on in the second week when things settle down?" My textbook answer is "the Project Manager is responsible for the successful completion and delivery of the expected outcome of the project through the following major tasks":

    1. Identifying requirements
    2. Establishing clear and achievable objectives
    3. Balancing the competing demands for quality, scope, time, and cost
    4. Adapting the specifications, plans, and approach to the different concerns and expectations of the various stakeholders

    However, my colleague is often a lead technical consultant coming into a project alone to help a client solve a complex problem. As Magenic consultants, we all possess many of the "project managing" skills I talked about above, and we tend to be responsible for items #1 and #2, as well as the actual architecture/design tasks early in a project. When the real development begins and there is no PM involved, the project will quickly get harder to execute unless items #3 and #4 are assigned to a Project Manager. In software development, context switching between coding and other administrative activities is the hardest skill to perfect. In my experience, I have rarely been introduced to someone who has mastered it. This is the limbo I was in when I was asked to become a PM -- while still developing. "Put down the code" was not only a profound statement but, looking back, a necessary one. Unless you are lucky enough to have found that one developer who is a superman, asking your developers (internal corporate or consultant) to perform tasks #3 and #4 will surely take more time, open the door to more scope, and eventually cost more. Project Managers are crucial to the overall success of a project, and I prefer them to start by taking ownership of delivery on day one.

    Read the article

  • Code maintenance: keeping a bad pattern when extending new code for being consistent or not ?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor. Should I:

    - create new methods using the existing convention, even if it feels wrong, to avoid confusion for the next maintainer and stay consistent with the code base? or
    - try to use what I feel is better, even if it means introducing another pattern into the code?

    Precision, edited after the first answers: the existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that could be avoided with good design (though the resulting code might then become harder to follow). In my current case it's a good old JDBC DAO module (with Spring's template on board), but I have encountered this dilemma before and I'm seeking other developers' feedback. I don't want to refactor because I don't have time. And even with time, it would be hard to justify that a whole, perfectly working module needs refactoring; the refactoring cost would be heavier than its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme 'Keep It Stupid Simple', I think). So the question can also be asked like this: you, as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that will do the stupid boring code in your place? The downside of the latter being that you'll have to learn some stuff, and maybe you will have to maintain the easy stupid boring code too, until a full refactoring is done.

    Read the article

  • Convert MP3 to AAC,FLAC to AAC (.NET/C#) FREE :)

    - by PearlFactory
    So I was tasked with looking at converting 10 million tracks from MP3 320k to AAC, and also converting MP3 320k to MP3 128k. After a bit of hunting around, the tool you need to use is FFMPEG (download the x64 Windows build). For the best results, also get the Nero AAC Encoder. Now the command lines.

    STEP 1 (from FLAC):

        ffmpeg -i input.flac -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    or (from MP3):

        ffmpeg -i input.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    The output.m4a is an intermediate state that we now put an AAC wrapper on via FFMPEG.

    STEP 2:

        ffmpeg -i output.m4a -vn -acodec copy final.aac

    Done :) There are a couple of options with the FFMPEG library, in that we could also import the libraries and manipulate the API directly for the same result; FFMPEG has this support. You can get the relevant libraries from their site, and they even have the source if you are that keen :-) In this case I am going to wrap the command lines in C# external process calls. (For the app that I am building to convert the 10 million tracks, there is a complex multithreaded app supporting this novel code.)

        using System.Diagnostics;

        string inputFile = "input.mp3";
        string outputFile = "output.m4a";

        // Arrange metadata about the call. The pipe (|) is a shell feature,
        // so the whole pipeline is handed to cmd.exe /C rather than passed
        // to ffmpeg.exe directly (ffmpeg would not understand the pipe).
        string sArgs = string.Format(
            "/C ffmpeg -i \"{0}\" -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of \"{1}\"",
            inputFile, outputFile);

        ProcessStartInfo p = new ProcessStartInfo();
        p.FileName = "cmd.exe";
        p.Arguments = sArgs;
        p.CreateNoWindow = true;
        p.UseShellExecute = false;        // required for output redirection
        p.RedirectStandardOutput = true;

        // Execute
        Process myProcess = new Process();
        myProcess.StartInfo = p;
        myProcess.Start();

        // Read details about the call *before* waiting, so the child process
        // can't deadlock on a full output buffer.
        string output = myProcess.StandardOutput.ReadToEnd();
        myProcess.WaitForExit();

    Now we would execute a second call using the same code, but with different sArgs, to put the AAC wrapper on the m4a file. That's it. So if you need to do some conversions of any kind for your ASP.NET sites/apps, this is a great start and super fast. With conversion times of around 2-3 seconds, all of this can be done on the fly :-)

    Justin Oehlmann

    ref: StackOverflow.com

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    - Presents a collection of standardized database service definitions
    - Defines standardized pools of hardware and software for provisioning
    - Role based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used: maximum availability, maximum performance, maximum protection
    - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below:

        Primary    Standby [1 or more]
        -------    -------------------
        SI         -
        SI         SI
        RAC        -
        RAC        SI
        RAC        RAC
        RON        -
        RON        RON

    (All of the above are supported in EM 12cR4; RON = RAC One Node, which is supported via custom post-scripts in the service template.)

    A sample service catalog would look like the image below [screenshot omitted]. Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup: no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduces the dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c: performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit is in reality a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary xml: this file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations or container database names that would be used as the infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input xml: this file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention. These have mostly been asked for by customers like you. They are:

    - Custom naming of DB Services: self service users can provide custom names for DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' of Service Templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: view the details of a profile, like datafiles, control files, snapshot ids, export/import files, etc., prior to its selection in the service template
    - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per request basis or for the entire pool; as an extension, you can also delete successful requests
    - Improved delete user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for schema as a service: in addition to multiple schemas, users can also specify multiple tablespaces per request

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References: Cloud Management Page on OTN; Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • JQuery / JSON + .Net Service Layer - to WCF or Not to WCF?

    - by hanzolo
    I recently had a discussion with a colleague of mine about the pros and cons of WCF. He mentioned how much code is generated to support WCF, and also the overhead required. It was mentioned that a simple jQuery/Ajax post to a .aspx page (or a handler, for that matter) that returns JSON would work more efficiently and take much less code to implement. I am also aware of the new WCF Web API and feel that technology may solve the "bloated"-ness required in attaining a proxy etc. by just outputting JSON. So, when developing a relational DB (MSSQL) storage model with a fairly complex business layer (C#) and data access layer (Entity Framework), what is a good technology for creating a "service layer" which will spit out view models represented in JSON, with a CQRS (Command Query Responsibility Segregation) approach in mind? The app would use the service layer to support its required UI, as well as provide an available subset of services (outputting JSON data) for service subscribers. In other words: an admin panel to support the admin UI, and service endpoints that return JSON to access the configurations made from the administration UI. What are some potential technologies to use as the transport/communication layer? I'd like to use a pure RESTful approach, but am not against doing some URL rewriting with IIS. Obviously some of the available technologies are:

    - WCF
    - WCF Web API (should this even be separate?)
    - Straight request/response (query string to .aspx / handler) - a sketch of this option follows below

    Would using ASP.NET MVC solve this entire problem? Maybe its single page app approach? Any suggestions / feedback from developing this type of application? Thanks,
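
    For scale, here is roughly what the "straight request/response" option costs in code: a bare handler that serialises a view model to JSON (the view model and its fields are invented for illustration). The jQuery side is just $.getJSON against the handler's URL.

        using System.Web;
        using System.Web.Script.Serialization;   // System.Web.Extensions assembly

        public class AdminConfigHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // In a real service layer this would come from the BL/EF stack.
                var viewModel = new { Name = "site-settings", Version = 3 };

                context.Response.ContentType = "application/json";
                context.Response.Write(new JavaScriptSerializer().Serialize(viewModel));
            }
        }

    No proxies, no contracts, no bindings; the trade-off is that you also give up WCF's metadata, security and versioning machinery and own those concerns yourself.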

    Read the article

  • How to balance a non-symmetric "extension" based game?

    - by Klaim
    Most strategy games have fixed units and possible behaviours. However, think of a game like Magic: The Gathering: each card is a set of rules. Regularly, new sets of card types are created. I remember that the first editions of the game were said to be prohibited in official tournaments because the cards were often too powerful. Later extensions of the game provided more subtle effects/rules in cards, and they managed to balance the game apparently effectively, even if there are thousands of different cards possible. I'm working on a strategy game that is a bit in the same position: all units are provided by extensions, and the game is intended to be extended for some years, at least. The variety of unit effects is very large, even with some basic design limitations set to be sure it's manageable. Each player chooses a set of units to play with (defining their global strategy) before playing (like choosing a themed deck of Magic cards). As it's a strategy game (you can think of Magic as a strategy game too, from some point of view), it's essentially skirmish based, so the game has to be fair, even if the players don't choose the same units before starting to play. So, how do you proceed to balance this type of non-symmetric (strategy) game when you know it will always be extended? For the moment, I'm trying to apply the following rules, but I'm not sure they are right because I don't have enough design experience to know:

    - each unit should provide one unique effect;
    - each unit should have an opposite unit with an opposite effect that cancels it out;
    - some limitations based on the gameplay;
    - try to get a lot of beta tests before each extension release?

    Looks like I'm in the most complex case?

    Read the article

  • SQL SERVER 2008 – 2011 – Declare and Assign Variable in Single Statement

    - by pinaldave
    Many of us tend to overlook simple things even if we are capable of doing complex work. In SQL Server 2008, inline variable assignment is available. This feature has existed for the last 3 years, but I hardly see it utilized. One of the common arguments is that when a project is migrated from an earlier version, the feature goes unused. I totally accept this argument and acknowledge it. However, my point is that this new feature should be used in all new coding - what is your opinion? The code which we used in SQL Server 2005 and earlier versions is as follows:

        DECLARE @iVariable INT, @vVariable VARCHAR(100), @dDateTime DATETIME
        SET @iVariable = 1
        SET @vVariable = 'myvar'
        SET @dDateTime = GETDATE()
        SELECT @iVariable iVar, @vVariable vVar, @dDateTime dDT
        GO

    The same should be re-written as follows:

        DECLARE @iVariable INT = 1, @vVariable VARCHAR(100) = 'myvar', @dDateTime DATETIME = GETDATE()
        SELECT @iVariable iVar, @vVariable vVar, @dDateTime dDT
        GO

    I have started to use this new method to assign variables, as I personally find it very easy to read as well as write. Do you still use the earlier method to declare and assign variables? If yes, is there any particular reason, or is it just an old routine? I am interested to hear about this.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Learn How Ancestry.com Helps Families Uncover Their History with Oracle WebCenter

    - by Christie Flanagan
    Delivering Exceptional Online Customer Experiences

    Ancestry.com is the world's largest online family history resource, providing an engaging and interactive customer experience to more than 1.7 million members. With smart search technology, a wealth of learning resources, and a worldwide community of family history enthusiasts, Ancestry.com helps people discover their roots and tell their unique family stories. Key to Ancestry.com's success has been the delivery of an online customer experience that converts site visitors into paying subscribers and keeps them coming back. To help achieve this goal, Ancestry.com turned to Oracle's Web experience management solution, Oracle WebCenter Sites. Join us as executives from Ancestry.com and Oracle discuss how Oracle's Web experience management solution is helping them deliver engaging online experiences. Learn how:

    - Ancestry.com selected Oracle WebCenter Sites to meet their demanding Web experience management requirements
    - The company was able to get up and running quickly despite a complex technology stack and challenging integration requirements with legacy systems
    - Ancestry.com empowered business users to manage the online experience and significantly reduce time to market for their online campaigns and initiatives

    Register now for the Webcast.

    REGISTER NOW: Thursday, June 28, 2012, 10 a.m. PT / 1 p.m. ET

    Presented by: Blane Nelson, Chief Architect - Applications, Ancestry.com; Christie Flanagan, Director of Product Marketing, Oracle WebCenter Sites, Oracle

    Read the article

  • Oracle in Romania - 1: Brand new office, "Greenest in Bucharest"

    - by Steve Walker
    The importance of Romania within Oracle's global operations was underlined the other day as a marvellous new office building was opened at Floreasca Park in Bucharest. The importance of the new facility was further underlined by the presence of Oracle President Safra Catz, who participated in the opening ceremony. Opening the building alongside Oracle Romania country leader Sorin Mindrutescu, Safra Catz said, "Our presence in Bucharest is significant and the work our teams are doing here is hugely valuable to our company and to our customers and partners. Our expansion in Bucharest signals our success in the region and commitment to making a positive contribution to the Romanian economy." The office itself looks very impressive. But more importantly, it is a cutting edge "green" office building in Bucharest, offering modern, environmentally friendly solutions such as a geo-thermal pump for heating and cooling, eco-friendly and chemical free materials used in walls and floors, a complex shading system, a bio-diversity garden, and water and electricity saving equipment throughout the building. Floreasca Park is styled "the greenest office building in Bucharest", and its environmental credentials are laid out in full in a comprehensive infographic. Finally, Oracle's commitment to its Romanian operation was recognized as the company is proud to have been voted the most desired employer in Romania in surveys conducted by Catalyst Solutions and Brainspotting Consultancy. So, here's to the success of the Romanian operation, an important part of Oracle's global business and further testament to the importance of EMEA's contribution to the company's success. Further links: photos from the opening ceremony, press release, infographic about the Floreasca Park building.

    Read the article

  • How to manage a developer who has poor communication skills

    - by djcredo
    I manage a small team of developers on an application which is at the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes:

    - Working with DBA / Unix / Network / Load balancer teams on various tasks
    - Placing and managing orders for hardware or infrastructure in different regions
    - Running tests that have not yet been migrated to CI
    - Analysis
    - Support / Investigation

    It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team were hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff. However, one member of the team probably has above-average coding skills, but below-average communication skills. Traditionally, the previous development manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing the well-rounded skillset that is commonly required in a big-business IT department. What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like.

    Read the article

  • How To Change Window Transparency in Windows 7 with a Hotkey

    - by YatriTrivedi
    Linux has a lot of eye-candy because of Compiz, my favorite of which is the window opacity plugin. Using a short AutoHotKey script, you can add that same functionality to Windows 7. I used this AHK script as a basis for changing opacity. It uses a single hotkey to change the active window's transparency by 25% each time until it resets to 0. I wanted functionality similar to Compiz, so I modified the script to use the mouse wheel and shortened the increments to get more variety. Just hold down the Windows key and scroll down to see through the window. This decreases opacity and makes windows more transparent.

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives to refactor a driver management module. In my multitier architecture I have:

    - Base class: DAL.Device (my entity)
    - Interfaces: BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device), BL.IDriverFactory (handles the driver creation requests)

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) changing code within the DriverFactory and b) referencing the new IDriver implementation / assembly. From a customer's point of view, that means every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy):

        BL.RestDriver
        BL.RestDriverCreator
        DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver dlls via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice); a sketch of that lookup follows below. Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
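
    A C# sketch of the convention-based lookup (Device and IDriverCreator below are stand-ins for the question's DAL.Device and BL.IDriverCreator; the assembly handling is illustrative):

        using System;
        using System.Linq;
        using System.Reflection;

        public class Device { }
        public interface IDriverCreator { }

        public class ConventionDriverFactory
        {
            private readonly Assembly[] _driverAssemblies;

            public ConventionDriverFactory(params Assembly[] driverAssemblies)
            {
                // e.g. every assembly loaded from a "drivers" plugin folder
                _driverAssemblies = driverAssemblies;
            }

            public IDriverCreator CreatorFor(Device device)
            {
                // "RestDevice" -> "Rest" -> "RestDriverCreator"
                string prefix = device.GetType().Name.Replace("Device", "");
                string wanted = prefix + "DriverCreator";

                Type creatorType = _driverAssemblies
                    .SelectMany(a => a.GetTypes())
                    .SingleOrDefault(t => t.Name == wanted &&
                                          typeof(IDriverCreator).IsAssignableFrom(t));

                if (creatorType == null)
                    throw new InvalidOperationException(
                        "No driver creator found matching " + wanted);

                return (IDriverCreator)Activator.CreateInstance(creatorType);
            }
        }

    New drivers then ship as drop-in assemblies and only need to follow the naming rule; the factory never changes. The custom-attribute variant would replace the Name comparison with a lookup for, say, a hypothetical [DriverFor(typeof(RestDevice))] marker, which survives renames better but is otherwise the same scan.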

    Read the article
