Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.


  • Looking for some OO design advice

    - by Andrew Stephens
    I'm developing an app that will be used to open and close valves in an industrial environment, and was thinking of something simple like this:

        public static class ValveController
        {
            public static void OpenValve(string valveName)
            {
                // Implementation to open the valve
            }

            public static void CloseValve(string valveName)
            {
                // Implementation to close the valve
            }
        }

    (The implementation would write a few bytes of data to the serial port to control the valve - an "address" derived from the valve name, and either a "1" or "0" to open or close the valve.) Another dev asked whether we should instead create a separate class for each physical valve, of which there are dozens. I agree it would be nicer to write code like PlasmaValve.Open() rather than ValveController.OpenValve("plasma"), but is this overkill? I was also wondering how best to tackle the design with a couple of hypothetical future requirements in mind:

    - We are asked to support a new type of valve requiring different values to open and close it (not 0 and 1).
    - We are asked to support a valve that can be set to any position from 0-100, rather than simply "open" or "closed".

    Normally I would use inheritance for this kind of thing, but I've recently started to get my head around "composition over inheritance" and wonder if there is a slicker solution to be had using composition?
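    A minimal sketch of how composition might address both hypothetical requirements: the wire protocol for a valve family (which bytes mean "open") is injected into a generic Valve class rather than baked into a class hierarchy. All names here (IValveProtocol, ISerialPort, etc.) are invented for illustration, not taken from the question:

        // Stand-in for whatever serial port wrapper the app already uses.
        public interface ISerialPort
        {
            void Write(byte[] data);
        }

        // A strategy object encapsulates the wire protocol for one family of valves.
        public interface IValveProtocol
        {
            byte[] OpenCommand(byte address);
            byte[] CloseCommand(byte address);
        }

        // Binary valves: "1" opens, "0" closes.
        public class BinaryValveProtocol : IValveProtocol
        {
            public byte[] OpenCommand(byte address)  { return new byte[] { address, 1 }; }
            public byte[] CloseCommand(byte address) { return new byte[] { address, 0 }; }
        }

        // A valve is composed from an address and a protocol; no subclass per valve.
        public class Valve
        {
            private readonly byte address;
            private readonly IValveProtocol protocol;
            private readonly ISerialPort port;

            public Valve(byte address, IValveProtocol protocol, ISerialPort port)
            {
                this.address = address;
                this.protocol = protocol;
                this.port = port;
            }

            public void Open()  { port.Write(protocol.OpenCommand(address)); }
            public void Close() { port.Write(protocol.CloseCommand(address)); }
        }

    Usage would be e.g. var plasmaValve = new Valve(0x03, new BinaryValveProtocol(), port); a new valve type with different command bytes only needs a new IValveProtocol, and a positional 0-100 valve could be a separate class composed from its own protocol, rather than forcing both behaviours into one inheritance tree.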

    Read the article

  • How to develop "Client script library" for ASP.net controls and how do these work?

    - by Niranjan Kala
    I have been working on the .Net platform for 2 years, and right now I have been working with DevExpress controls for 6 months. All these controls have client-side events which live under the ClientScript namespace of the particular control, which specifies a ClientInstanceName plus methods and properties accessible on the client side. For example, Button1 is a ClientInstanceName and Button1.Text is a property, with methods like these:

        Button1.SetValue();
        Button1.GetValue();

    In ASP.Net controls, buttons have the ClientClick event that fires before the server-side Click event. I have inspected and googled how to extend client-side functionality in ASP.Net controls - for example, to create a ClientInstanceName property for controls, or a CheckedChanged event for the CheckBox/RadioButton controls. I have tried using these MSDN articles: Injecting Client-Side Script from an ASP.NET Server Control, and Working with Client-Side Script. I got much information and ideas from these articles on how to implement/extend these, and all of it works on the client side.

        protected override void AddAttributesToRender(HtmlTextWriter writer)
        {
            base.AddAttributesToRender(writer);
            string script = @"return confirm(""%%POPUP_MESSAGE%%"");";
            script = script.Replace("%%POPUP_MESSAGE%%", this.PopupMessage.Replace("\"", "\\\""));
            writer.AddAttribute(HtmlTextWriterAttribute.Onclick, script);
        }

    But here it is just setting an attribute on the button - all the interaction is client side, with no control from the server. Here is what I want to know: how can I implement such functionality - methods, properties, etc. on the client side? For example, I am creating a PopControl, as in the above code snippet, with the same behavior as the Ajax ModalPopupExtender, including its OK-button-related properties. Ajax controls can be directed to perform work from server-side code, e.g. Popup1.Show(). How can I implement that kind of client-enabled control myself? I am learning the creation of Ajax controls, but I do not want to use a ScriptManager or depend on another control - just some extension to the standard controls. I am looking for ideas and implementation methods for such functionality.
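    A minimal sketch of one common approach, assuming you are willing to emit the client API yourself from OnPreRender: the control registers a small JavaScript object named after a ClientInstanceName property, giving page script something like MyPopup.Show() to call. The control and property names here are invented for illustration:

        public class PopControl : System.Web.UI.WebControls.WebControl
        {
            // Client-side name under which the control's JS object is published.
            public string ClientInstanceName { get; set; }

            protected override void OnPreRender(System.EventArgs e)
            {
                base.OnPreRender(e);
                if (!string.IsNullOrEmpty(ClientInstanceName))
                {
                    // Emit a tiny client-side object exposing Show()/Hide() for this control.
                    string script = string.Format(
                        "var {0} = {{ Show: function() {{ document.getElementById('{1}').style.display = 'block'; }}, " +
                        "Hide: function() {{ document.getElementById('{1}').style.display = 'none'; }} }};",
                        ClientInstanceName, this.ClientID);
                    Page.ClientScript.RegisterStartupScript(this.GetType(),
                        this.ClientID + "_clientApi", script, true);
                }
            }
        }

    Driving it from server-side code (the Popup1.Show() style) is then a matter of registering one more startup script that calls the same client object when the page loads.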

    Read the article

  • Filtering your offices IPs from Google Analytics when each has a dynamic IP?

    - by leeand00
    I found the documentation for filtering IPs from Google Analytics, but our company's several locations all have dynamic IP addresses which, from what I'm told, change every 30 days. I know from working with Dynamic DNS that the provider usually gives you a script, which you configure your router to run when its IP address changes or when it is restarted, that passes the new IP address to the DDNS server. I'm wondering if there might be a way to write, or use a preexisting, script to do the same thing with the Google Analytics API.

    Read the article

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it, whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums, I often find a widespread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists, while programming is more about getting things done. What is normally implied here is that programming is something very practical, and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc., may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good text books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools for getting things done. So my question is: why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions around "why do I not see a parallel plan" while Auto DOP is on (I think)...? It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system.

    Parameter Settings to Look At

    First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing, and we will only do the auto magic if your objects are set to default parallel (so no degree specified).

    Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the Default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance, you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thought. See here for more information. In general, do stick with either CPU or with a specific number. For now, avoid the IO setting, as I've seen some mixed results with that...

    In 11.2.0.2 you should also check that IO Calibrate has been run. Best to simply do a:

        SQL> select * from V$IO_CALIBRATION_STATUS;

        STATUS        CALIBRATION_TIME
        ------------- ----------------------------------------------------------------
        READY         04-JAN-11 10.04.13.104 AM

    You should see that your IO Calibrate is READY, and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step you will get the following note in the explain plan:

        Note
        -----
           - automatic DOP: skipped because of IO calibrate statistics are missing

    One more note on calibrate_io: if you do not have asynchronous IO enabled you will see:

        ERROR at line 1:
        ORA-56708: Could not find any datafiles with asynchronous i/o capability
        ORA-06512: at "SYS.DBMS_RMIN", line 463
        ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
        ORA-06512: at line 7

    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.

    Explain Plan Explanation

    To see the notes that are shown and explained here (and the above little snippet) you can use a simple explain plan mechanism. There should be no need to add +parallel etc.

        explain plan for <statement>
        SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

    Auto DOP

    The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this:

        Automatic degree of parallelism is disabled: <reason>

    These are the reason codes:

    - Parameter - parallel_degree_policy = MANUAL, which will not allow Auto DOP to kick in
    - Hint - one of the following hints is used: NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL)
    - Outline - a SQL outline of an older version (before 11.2) is used
    - SQL property restriction - the statement type does not allow for parallel processing
    - Rule-based mode - instead of the Cost Based Optimizer, the system is using the RBO
    - Recursive SQL statement - the statement type does not allow for parallel processing
    - pq disabled/pdml disabled/pddl disabled - for some reason (alter session?) parallelism is disabled
    - Limited mode but no parallel objects referenced - your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree. In most cases all objects have a specific degree, in which case Auto DOP will honor that degree.

    Parallel Degree Limited

    When Auto DOP does its work, you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

    This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point in time to be more specific). It right now looks like this:

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 1

    Dynamic Sampling

    With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we'd throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:

        Note
        -----
           - dynamic sampling used for this statement (level=5)

    Read the article

  • Should I add parameters to instance methods that use those instance fields as parameters?

    - by john smith optional
    I have an instance method that uses instance fields in its work. I can leave the method without those parameters, as the fields are available to me, or I can add them to the parameter list, thus making my method more "generic" and not reliant on the class. On the other hand, the parameter list gets longer. Which approach is preferable, and why? Edit: at the moment I don't know if my method will be public or private. Edit 2: clarification: both the method and the fields are instance level.
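    A minimal sketch of the two options being compared, with hypothetical names: the first overload reads the instance field directly, the second takes the same value as an argument, so it no longer depends on the object's state (and could even be made static):

        public class InvoiceFormatter
        {
            private readonly string currencySymbol = "$";

            // Option 1: relies on the instance field directly.
            public string Format(decimal amount)
            {
                return currencySymbol + amount.ToString("0.00");
            }

            // Option 2: takes the value as a parameter; not tied to this
            // instance's state, so it is easier to test and reuse in isolation.
            public string Format(decimal amount, string symbol)
            {
                return symbol + amount.ToString("0.00");
            }
        }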

    Read the article

  • Buzzword for "performance-aware" software development

    - by errantlinguist
    There seems to be an overabundance of buzzwords for software development styles and methodologies: Agile development, extreme programming, test-driven development, etc... well, is there any sort of buzzword for "performance-aware" development? By "performance awareness", I don't necessarily mean low-latency or low-level programming, although the former would logically fall under the blanket term I'm looking for. I mean development in which resources are recognised to be finite and so there is a general emphasis on low computational complexity, good resource management, etc. If I was to be snarky, I would say "good programming", but that doesn't seem to get the message across so well...

    Read the article

  • Back in Atlanta! Wed, Feb 9 2011

    - by KKline
    I always enjoy spending time with my friends from Atlanta, as well as meeting folks and making new friends. If you live in the Atlanta area, I hope you'll join me on the evening of Wednesday, February 9th, 2011. Details are at the Atlanta SQL Server user group website. It's common knowledge that I have a terrible memory for many things. However, one of the few things that my memory is usually really good at is remembering names & faces (and remembering stories, but that is another story as well)....(read more)

    Read the article

  • Meaningful concise method naming guidelines

    - by Sam
    Recently I started releasing an open source project. While I was the only user of the library I did not care about the names, but now I want to assign clever names to each method to make the library easier to learn. I also need to use concise names, so they are easy to write as well. I was thinking about some guidelines for the naming; I am aware of lots of guidelines that only care about letter casing or some simple notes. Here, I am looking for guidelines for meaningful, concise naming. For example, this could be part of the guidelines I am looking for:

    - Use Add when an existing item is going to be added to a target; use Create when a new item is being created and added to a target.
    - Use Remove when an existing item is going to be removed from a target; use Delete when an item is going to be removed permanently.
    - Pair AddXXX methods with RemoveXXX, and pair CreateXXX methods with DeleteXXX methods, but do not mix them.

    The above guidance may be intuitive for native English speakers, but for someone like me, for whom English is a second language, I need to be told about things like this.
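    A small sketch of what an API following those pairing guidelines might look like - the class and member names are invented purely for illustration:

        public class Track { public string Title { get; set; } }
        public class Section { public string Name { get; set; } }

        public interface IPlaylist
        {
            // Add/Remove: the track already exists elsewhere; the playlist only references it.
            void AddTrack(Track track);
            void RemoveTrack(Track track);

            // Create/Delete: the playlist owns the section's whole lifetime.
            Section CreateSection(string name);
            void DeleteSection(Section section);
        }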

    Read the article

  • .NET - refactoring code

    - by w0051977
    I have inherited, and now further develop, a large application consisting of an ASP.NET application, a VB6 application and a VB.NET application. The software was poorly written. I am trying to refactor the code as I go along. The changes I am making are not live (they are contained in a folder on my development machine). This is proving to be time consuming, and I am doing it alongside other work, which is the priority. My question is: is this a practical approach, or is there a better methodology for refactoring code? I don't have any experience with version control or source control software, and I am wondering if this is what I am missing. I am a sole developer.

    Read the article

  • How do I overcome paralysis by analysis when coding?

    - by LuxuryMode
    When I start a new project, I oftentimes immediately start thinking about the details of implementation. "Where am I gonna put the DataBaseHandler? How should I use it? Should classes that want to use it extend from some abstract superclass? Should I use an interface? What level of abstraction am I going to use in my class that contains methods for sending requests and parsing data?" I end up stalling for a long time because I want to code for extensibility and reusability, but I feel it is almost impossible to get past thinking about how to implement perfectly. And then, if I try to just say "screw it, just get it done!", I hit a brick wall pretty quickly because my code isn't organized, I mix levels of abstraction, etc. What are some techniques/methods you have for launching into a new project while also setting up a logical/modular structure that will scale well?

    Read the article

  • Modular enterprise architecture using MVC and Orchard CMS

    - by MrJD
    I'm making a large-scale MVC application using Orchard, and I'm going to be separating my logic into modules. I'm also trying to heavily decouple the application for maximum extensibility and testability. I have a rudimentary understanding of IoC, the Repository pattern, the Unit of Work pattern and the Service Layer pattern. I've made myself a diagram. I'm wondering if it is correct, and if there is anything I have missed regarding an extensible application. Note that each module is a separate project. Update: I have many UI modules that use the DB module - that's why they've been split up. There are other services the UI modules will use. The UI modules have been split up because they will be made over time, independently of each other.
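    For reference, a minimal sketch of the repository and unit-of-work seams such a module layout usually decouples against - these names are hypothetical, not Orchard APIs:

        // Each UI module depends only on these abstractions, never on the data module.
        public interface IRepository<T> where T : class
        {
            T Get(int id);
            void Add(T entity);
            void Remove(T entity);
        }

        public interface IUnitOfWork : System.IDisposable
        {
            IRepository<T> Repository<T>() where T : class;
            void Commit(); // flush all pending changes as one transaction
        }

        public class Order { public int Id { get; set; } }

        // A service-layer class in a UI module, wired up by the IoC container.
        public class OrderService
        {
            private readonly IUnitOfWork unitOfWork;

            public OrderService(IUnitOfWork unitOfWork)
            {
                this.unitOfWork = unitOfWork;
            }

            public void PlaceOrder(Order order)
            {
                unitOfWork.Repository<Order>().Add(order);
                unitOfWork.Commit();
            }
        }

    The point of the seams is that a UI module project references only the interfaces; the data module supplies the implementations through the container, so either side can be swapped or tested in isolation.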

    Read the article

  • What is the best practice for when to check if something needs to be done?

    - by changokun
    Let's say I have a function that does x. I pass it a variable, and if the variable is not null, it does some action. And I have an array of variables, and I'm going to run this function on each one. Inside the function, it seems like good practice to check if the argument is null before proceeding. A null argument is not an error; it just causes an early return. I could loop through the array and pass each value to the function, and the function will work great. Is there any value in checking if the var is null and only calling the function if it is not null during the loop? This doubles up on the checking for null, but:

    - Is there any gained value?
    - Is there any gain in not calling a function?
    - Any readability gain on the loop in the parent code?

    For the sake of my question, let's assume that checking for null will always be the case. I can see how checking for some object property might change over time, which makes the first check a bad idea. Pseudo code example:

        for(thing in array) {
            x(thing)
        }

    Versus:

        for(thing in array) {
            if(thing not null) x(thing)
        }

    If there are language-specific concerns, I'm a web developer working in PHP and JavaScript.
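    A sketch of the guard-clause variant under discussion, shown in C# for consistency with the other examples on this page (the shape is the same in PHP or JavaScript); the names are hypothetical:

        public class Widget { public void DoWork() { /* ... */ } }

        public static class Processor
        {
            // The guard clause lives inside the function: a null item is not an
            // error, it simply causes an early return.
            public static void Process(Widget item)
            {
                if (item == null) return;
                item.DoWork();
            }

            public static void ProcessAll(Widget[] items)
            {
                // The caller stays clean; the null check is the callee's job.
                foreach (var item in items)
                {
                    Process(item);
                }
            }
        }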

    Read the article

  • Class Design and Structure Online Web Store

    - by Phorce
    I hope I have asked this in the right forum. Basically, we're designing an online store, and I am designing the class structure for ordering a product and want some clarification on what I have so far. A customer comes, selects their product, chooses the quantity and selects 'Purchase' (I am using the Facade pattern, so subsystems execute when this action is performed). My class structure:

        Order
        Product
        Customer

    There is no inheritance, more association: an Order has a Product, and a Customer has an Order. Does this structure look OK? I've noticed that I don't handle the "Quantity" separately; I was just going to add this into the "Product" class, but do you think it should be a class of its own? Hope someone can help.
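    One common way to place the quantity, sketched below with invented class names: it lives on an order line that associates an Order with a Product, rather than on the Product itself (which is shared across all orders):

        using System.Collections.Generic;
        using System.Linq;

        public class Product
        {
            public string Name { get; set; }
            public decimal UnitPrice { get; set; }
        }

        // Quantity belongs to the association between an order and a product.
        public class OrderLine
        {
            public Product Product { get; set; }
            public int Quantity { get; set; }
            public decimal LineTotal { get { return Product.UnitPrice * Quantity; } }
        }

        public class Order
        {
            public List<OrderLine> Lines { get; } = new List<OrderLine>();
            public decimal Total { get { return Lines.Sum(line => line.LineTotal); } }
        }

        public class Customer
        {
            public List<Order> Orders { get; } = new List<Order>();
        }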

    Read the article

  • In defense of SELECT * in production code, in some limited cases?

    - by Alexander Kuznetsov
    It is well known that SELECT * is not acceptable in production code, with the exception of this pattern:

        IF EXISTS( SELECT *

    We all know that whenever we see code like this:

    Listing 1. "Bad" SQL

        SELECT Column1, Column2
        FROM (SELECT c.*,
                     ROW_NUMBER() OVER (PARTITION BY Column1 ORDER BY Column2) AS rn
              FROM data.SomeTable AS c) AS c
        WHERE rn < 5

    we are supposed to automatically replace * with an explicit list of columns, as follows:

    Listing 2. "Good" SQL

        SELECT Column1, Column2
        FROM...(read more)

    Read the article

  • Caching strategies for entities and collections

    - by Rob West
    We currently have an application framework in which we automatically cache both entities and collections of entities at the business layer (using .NET cache). So the method GetWidget(int id) checks the cache using a key GetWidget_Id_{0} before hitting the database, and the method GetWidgetsByStatusId(int statusId) checks the cache using GetWidgets_Collections_ByStatusId_{0}. If the objects are not in the cache they are retrieved from the database and added to the cache. This approach is obviously quick for read scenarios, and as a blanket approach is quick for us to implement, but requires large numbers of cache keys to be purged when CRUD operations are carried out on entities. Obviously as additional methods are added this impacts performance and the benefits of caching diminish. I'm interested in alternative approaches to handling caching of collections. I know that NHibernate caches a list of the identifiers in the collection rather than the actual entities. Is this an approach other people have tried - what are the pros and cons? In particular I am looking for options that optimise performance and can be implemented automatically through boilerplate generated code (we have our own code generation tool). I know some people will say that caching needs to be done by hand each time to meet the needs of the specific situation but I am looking for something that will get us most of the way automatically.
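    A rough sketch of the identifier-list approach mentioned above (cache only the IDs for a collection, and each entity under its own key), assuming a simple cache abstraction with Get/Set by string key - all names here are hypothetical:

        using System.Collections.Generic;

        public interface ICache
        {
            T Get<T>(string key) where T : class;
            void Set(string key, object value);
        }

        public interface IWidgetStore
        {
            List<int> GetWidgetIdsByStatusId(int statusId);
            Widget GetWidget(int id);
        }

        public class Widget { public int Id { get; set; } }

        public class WidgetRepository
        {
            private readonly ICache cache;
            private readonly IWidgetStore database;

            public WidgetRepository(ICache cache, IWidgetStore database)
            {
                this.cache = cache;
                this.database = database;
            }

            public IList<Widget> GetWidgetsByStatusId(int statusId)
            {
                // The collection entry holds only identifiers, so updating one
                // entity never invalidates it - only that entity's own key.
                string key = string.Format("GetWidgets_Ids_ByStatusId_{0}", statusId);
                var ids = cache.Get<List<int>>(key);
                if (ids == null)
                {
                    ids = database.GetWidgetIdsByStatusId(statusId);
                    cache.Set(key, ids);
                }

                var widgets = new List<Widget>();
                foreach (int id in ids)
                {
                    widgets.Add(GetWidget(id)); // resolved via the entity's own cache entry
                }
                return widgets;
            }

            public Widget GetWidget(int id)
            {
                string key = string.Format("GetWidget_Id_{0}", id);
                var widget = cache.Get<Widget>(key);
                if (widget == null)
                {
                    widget = database.GetWidget(id);
                    cache.Set(key, widget);
                }
                return widget;
            }
        }

    The trade-off is an N+1 pattern of cache lookups per collection read, in exchange for far fewer keys to purge on writes.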

    Read the article

  • Constructs for wrapping a hardware state machine

    - by Henry Gomersall
    I am using a piece of hardware with a well-defined C API. The hardware is stateful, with the relevant API calls needing to be in the correct order for the hardware to work properly. The API calls themselves will always return, passing back a flag that advises whether the call was successful, or if not, why not. The hardware will not be left in some ill-defined state. In effect, the API calls advise indirectly of the current state of the hardware if the state is not correct to perform a given operation. It seems to be a pretty common hardware API style. My question is this: is there a well-established design pattern for wrapping such a hardware state machine in a high-level language, such that consistency is maintained? My development is in Python. I would ideally like the hardware state machine to be abstracted to a much simpler state machine and wrapped in an object that represents the hardware. I'm not sure what should happen if an attempt is made to create multiple objects representing the same piece of hardware. I apologise for the slight vagueness; I'm not very knowledgeable in this area, and so am fishing for assistance with the description as well!
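    One common shape for such a wrapper, sketched below in C# for consistency with the other examples on this page (the pattern translates directly to Python): the object tracks a simplified state, enforces the legal call order up front, and checks every API return flag, so the high-level state can never silently diverge from the hardware. Everything here (DeviceState, NativeApi, etc.) is hypothetical:

        using System;

        public enum DeviceState { Closed, Open, Armed }

        // Stand-in for the binding to the real C API.
        internal static class NativeApi
        {
            internal static int Open() { return 0; }
            internal static int Arm()  { return 0; }
        }

        public class Device
        {
            private DeviceState state = DeviceState.Closed;

            public DeviceState State { get { return state; } }

            public void Open()
            {
                RequireState(DeviceState.Closed);
                Check(NativeApi.Open());        // every call's status flag is checked
                state = DeviceState.Open;
            }

            public void Arm()
            {
                RequireState(DeviceState.Open); // enforce the legal call order
                Check(NativeApi.Arm());
                state = DeviceState.Armed;
            }

            private void RequireState(DeviceState expected)
            {
                if (state != expected)
                    throw new InvalidOperationException(
                        string.Format("Operation requires state {0}, but device is {1}.", expected, state));
            }

            private static void Check(int status)
            {
                // The C API reports failure (and indirectly, the true hardware
                // state) via its return flag; surface it rather than carry on.
                if (status != 0)
                    throw new InvalidOperationException("Hardware call failed with status " + status);
            }
        }

    On the multiple-objects question, a common answer is to hand the wrapper out through a factory or registry keyed by the physical device, so two objects can never both claim the same hardware.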

    Read the article

  • How do game programmers design their classes to reuse in AI, network and play and pass mode?

    - by Amogh Talpallikar
    For a two-player game where your opponent could be on the network, the CPU itself, or sitting next to you playing turn by turn on the same machine, how do people design classes for reuse? I am in a similar situation and have no experience in making such complex games. But here is what I have thought: if I am a player object, I should only be interacting with the GameManager or GameEngine singleton, from which I will get various notifications about the game status. I don't care where and who my opponent is; this GameManager, depending upon the game mode, will interact with a gameNetworkManager or AI and tell me what the opponent played. I am not sure about the scenario where we play and pass [turn by turn on the same machine]. Hoping for a brief but clear explanation, or at least a link to a similar resource. :)
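    A sketch of the abstraction usually suggested for this: the game manager talks to an interface for each seat, and network, AI and pass-and-play are just different implementations chosen at setup time. All names below are invented for illustration:

        public class Move { }
        public class GameState { public void Apply(Move move) { /* update board */ } }

        // The game manager only ever sees this interface.
        public interface IPlayerInput
        {
            Move GetNextMove(GameState state);
        }

        // Pass-and-play: both seats read from the same local screen/controls.
        public class LocalPlayerInput : IPlayerInput
        {
            public Move GetNextMove(GameState state) { /* prompt shared screen */ return new Move(); }
        }

        // CPU opponent: computes a move instead of asking anyone.
        public class AiPlayerInput : IPlayerInput
        {
            public Move GetNextMove(GameState state) { /* run the AI search */ return new Move(); }
        }

        // Network opponent: blocks until the remote move arrives.
        public class NetworkPlayerInput : IPlayerInput
        {
            public Move GetNextMove(GameState state) { /* await wire protocol */ return new Move(); }
        }

        public class GameManager
        {
            private readonly IPlayerInput[] players;
            private int currentPlayer;

            public GameManager(IPlayerInput playerOne, IPlayerInput playerTwo)
            {
                players = new[] { playerOne, playerTwo };
            }

            public void PlayTurn(GameState state)
            {
                // Play-and-pass mode is simply two LocalPlayerInput instances.
                Move move = players[currentPlayer].GetNextMove(state);
                state.Apply(move);
                currentPlayer = 1 - currentPlayer;
            }
        }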

    Read the article

  • How do I improve my code reading skills

    - by Andrew
    Well, the question is in the title: how do I improve my code reading skills? The software/hardware environment I currently do development in is quite slow with respect to compilation times and the time it takes to test the whole system. The system is quite old/complex, and thus splitting it into several smaller, more manageable sub-projects is not feasible in the near future. I have realized that what really hinders the development progress is my code reading skills. How do I improve my code reading skills, so I can spot most of the errors and issues in the code even before I hit the "do compile" key, even before I start the debugger?

    Read the article

  • Approach on Software Development Architecture

    - by john ryan
    Hi, I am planning to standardize the way we create projects for our new work. Currently we are using a 3-tier architecture, where we have a ClassLibrary project which includes our Data Access Layer and Business Layer, something like:

        Solution ClassLibrary
            ClassLibrary Project:
                DAL (folder)
                    DAL classes
                BAL (folder)
                    BAL classes

    This class library DLL is referenced by our presentation-layer project, which is the application (web/desktop), something like:

        Solution WebUniversitySystem
            Libraries (folder)
                ClassLibrary.dll
            WebUniversitySystem (Project):
                Reference ClassLibrary.dll
                Pages etc...

    Now what I am planning to do is something like:

        Solution WebUniversitySystem
            DataAccess (Project)
            BusinessLayer (Project)
                Reference DAL
            WebUniversitySystem (Project):
                Reference BAL
                Pages etc...

    Is this OK, or is there a good approach that we can follow? Thanks and regards.

    Read the article

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, interfaces instead of concrete classes. This leads to the use of factories quite often. Since you can't instantiate a Property or Model, you must have something else do it for you - the Factory design pattern. My question, then, is: what is the reasoning behind using this pattern as opposed to a traditional class hierarchy system? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model, I could just instantiate those classes; I don't need to ask a Factory to give me one. As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: a tree is made up of pearls, which can be either Page, Reference, Alias, or Root pearls. My question comes from trying to figure out whether I should adopt the Jena philosophy in my library which uses Jena, or whether I should disregard it, pick my own design philosophy, and stick with it.
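    For concreteness, a minimal sketch (in C#, with invented names - this is not Jena's API) of the style in question: callers program against an interface and obtain instances from a factory, so the backing implementation can vary without touching client code:

        public interface IModel
        {
            void Add(string statement);
        }

        // Two interchangeable implementations behind the same interface.
        internal class MemoryModel : IModel
        {
            public void Add(string statement) { /* keep triples in memory */ }
        }

        internal class DatabaseModel : IModel
        {
            public void Add(string statement) { /* persist triples to a store */ }
        }

        public static class ModelFactory
        {
            // Client code never names a concrete class, so the library is free
            // to swap, pool, proxy or cache implementations behind this call.
            public static IModel CreateMemoryModel() { return new MemoryModel(); }
            public static IModel CreateDatabaseModel() { return new DatabaseModel(); }
        }

    The usual rationale is exactly that freedom: with internal constructors, the library keeps control of instantiation, which a public hierarchy of concrete classes would give away.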

    Read the article

  • Which language and platform features really boosted your coding speed?

    - by Serge
    The question is about delivering working code faster, without any regard for design, quality, maintainability, etc. Here is the list of things that help me to write and read code faster:

    - Language: static typing, support for object-oriented and functional programming styles, embedded documentation, short compile-debug-fix cycle or REPL, automatic memory management
    - Platform: "batteries" included (text, regex, IO, threading, networking), thriving community, tons of open-source libs
    - Tools: IDE, visual debugger, code completion, code navigation, refactoring

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the respective service template selected by the Self Service user while requesting a database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, whether for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So we need to populate the database using the post-deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, 2) transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS, and you can even have your own method plugged in as part of the post-deployment script option.

    Using Data Pump to populate databases

    These are the steps to be followed to implement extending DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

        Figure 1: Full export of database using Data Pump

    2. Create a post-deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available in a shared location accessible from the servers where databases are likely to be provisioned:

        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Running import
            h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

        Figure 2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database - include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the Customize "Create Database Deployment Procedure" flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure. [Figure 3: Using the Custom Script option for calling the import SQL]

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.

    Using transportable tablespaces to populate databases

    A copy of all user/application tablespaces will enable this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big-endian to little-endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required, and describes the steps required to convert datafiles and back up a tablespace.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post-deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available in a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post-deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database - all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post-deployment script.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.

    More Information:
    - Database-as-a-Service on Exadata Cloud
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Programmers and tech email: Do you actually read all of them?

    - by AdityaGameProgrammer
    Email alerts, blog/forum updates, discussion subscriptions, general programming/technology update emails that we often subscribe to - do you actually read them, or do you go direct to the source when you find time? Often a programmer's mailbox is filled with loads of unread subscription mail from technology they were previously following, or worked on, or things they wish to follow, and some or a majority of these mails just keep on piling up. I personally have a few updates that I wish I read, but I constantly avoid them, keep them for later, and finally delete them in an effort to keep the inbox clean. A few questions come to mind regarding this:

    - Do you keep such mail in separate accounts?
    - Do you read all the mail you have subscribed to?
    - Do you ever unsubscribe from any such email if you aren't reading it?
    - How much do you really value these emails?
    - Lastly, do you keep your inbox clean?

    I wish to deal with this in a better way.

    Read the article

  • Recap - SQL Saturday 151 in Orlando

    - by KKline
    It's always a feel-good experience for me to return to SQL Saturday in Orlando, the place where SQL Saturdays were started by Andy Warren ( Twitter | Blog ). On this trip, I delivered a full-day, pre-conference seminar on Troubleshooting and Performance Tuning SQL Server. I also delivered a session on SQL Server Internals and Architecture to a totally packed house. For those of you who emailed me directly, here's the link for the special SQL Sentry offer . I got to attend the extended events session...(read more)

    Read the article
