Search Results

Search found 30279 results on 1212 pages for 'database drift'.

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar’s Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a ‘perfect storm’ of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one’s work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven’t far more qualified and established commentators, MVPs and so on, already said it? While it’s a good idea to pick reasonably fresh and interesting topics, it’s more important not to let such fears lead to writer’s block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn’t always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what’s involved in getting an application that is ‘done’ ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we’d love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to ‘throw your hat into the ring’, drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically. And it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7GB free, Box offers 5GB free – both are reliable and it is simple to send your backups there. SQLBackupAndFTP in its latest version 9 added the option to back up to SkyDrive and Box (in addition to a local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you’d like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail. Select databases to back up: first connect to your SQL Server or Azure SQL Database, then select the databases you’d like to back up. Connect to the SkyDrive or Box cloud: if you have the free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled, as it is available in the Standard version or above. Click “Try now” to get a 30-day trial of all options. On the “SkyDrive Settings” form you’ll need to authorize SQLBackupAndFTP to access your SkyDrive. Click “Authorize…” to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account and click “Allow”. On the next page you will see a field with the authorization code. Copy it to the clipboard. The Box procedure is just the same. After that, return to SQLBackupAndFTP, paste the authorization code and click “OK”. After you are authorized, you can enter the path to a backup folder. SQLBackupAndFTP will create the folder if it does not exist. That’s all that has to be done to back up to the SkyDrive or Box cloud. You can now click the “Run Now” button to test this job. Conclusion: whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses: 5–9 licenses: 20% off; 10–19 licenses: 35% off; more than 20 licenses: 50% off. Please let me know your favorite options for storing the backups. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
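
    For readers who prefer to script the underlying operation themselves, below is a minimal, hedged C# sketch of the plain T-SQL backup that tools like SQLBackupAndFTP automate before uploading. The connection string, database name and backup path are illustrative assumptions only, and BACKUP DATABASE applies to an on-premises SQL Server instance; Azure SQL Database uses different backup mechanisms.

    // Hedged sketch (not SQLBackupAndFTP's implementation): issue a plain T-SQL backup via ADO.NET.
    // The connection string, database name and path are assumptions for illustration.
    using System;
    using System.Data.SqlClient;

    class BackupSketch
    {
        static void Main()
        {
            const string connectionString = "Server=.;Database=master;Integrated Security=true;"; // assumed
            const string databaseName = "MyAppDb";                                                 // assumed
            string backupPath = $@"C:\Backups\{databaseName}_{DateTime.UtcNow:yyyyMMdd_HHmmss}.bak";

            // The device path can be passed as a parameter; the database name must be an identifier.
            string sql = $"BACKUP DATABASE [{databaseName}] TO DISK = @path WITH INIT, CHECKSUM;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@path", backupPath);
                command.CommandTimeout = 0; // backups can exceed the 30-second default
                connection.Open();
                command.ExecuteNonQuery();
                Console.WriteLine("Backup written to " + backupPath);
            }
        }
    }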

    Read the article

  • Oracle Excellence Awards 2012

    - by A&C Redaktion
    Specialized partners can apply from now until June 29 to become a “Specialized Partner of the Year”. With this award, Oracle honors those OPN partners in EMEA that have particularly distinguished themselves through specialization, whether by delivering high added value for their end customers or through innovative solutions and services. The prerequisite for an application is at least one completed specialization in these seven categories:
    Specialized Partner of the Year: Database
    Specialized Partner of the Year: Applications
    Specialized Partner of the Year: Middleware
    Specialized Partner of the Year: Industry
    Specialized Partner of the Year: Oracle Accelerate
    Specialized Partner of the Year: Servers and Storage Systems
    Specialized Partner of the Year: Oracle on Oracle
    The winner of a “Specialized Partner of the Year” EMEA award receives US$5,000 in MDF and a wide range of opportunities to put themselves in the spotlight. As in the previous year, the award will again be presented at Oracle OpenWorld in San Francisco. Interviews, videos, advertising, coverage in Oracle Magazine and a free conference pass are, of course, included. As always, the application for the EMEA award also counts as an application for the German Partner Award 2012, which will be presented at the Oracle Partner Day (in the autumn, after OpenWorld). This time, the nomination of your successful project must be submitted here in English, as an international jury will decide. Describe your project in as much detail as possible so that the jurors can get a clear picture of your achievements and services. If you need support with this, simply ask your Channel Manager. After all, the more compelling your submission, the higher your chance of winning! You can find all the information about this year’s awards on the Oracle Excellence Awards website. We are keeping our fingers crossed for you!

    Read the article

  • EPM Planning (Hyperion) V11.1.2 Implementation Hands-On Boot-camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    5-Day Training for Partners: 29th October - 2nd November 2012, London (UK): REGISTER Here This FREE for Partners 5-day workshop is designed to provide implementation instruction on Oracle Hyperion EPM Planning.  This boot-camp is intended for prospective implementers of the Planning and Budgeting functionality of Oracle EPM or implementers that are currently familiar with the basics of EPM Planning and looking to strengthen their base of knowledge in the product. The class begins with an overview of Essbase, the foundation of Hyperion Planning. It provides a general overview of Planning and Planning terms, the architecture of all the Planning components, and how they are commonly used. The course goes over all the steps to create an application from scratch. This involves some preparation work outside of Planning and leads to developing the application in both the Planning Windows and Web clients. Participants will modify existing dimensions and build out the hierarchies using the Web client. Topics Covered The boot-camp shows developers how to build out dimensions using Classic Planning and by using EPMA. It covers the mechanics and cover strategies for automating the build process such as interface tables. It reviews data loads using Load Rules to load the Planning database. The course focuses on tasks that end-users must perform during the planning cycle. It walks students through creating and modifying forms, working with forms to enter data, adding annotations, and the rest of the form features such as running business rules and managing task lists. It covers how to use the forms in the Smart View client and finishes up the end-user perspective by going through Workflow Management and the process of submitting a plan for review. The final section of the course covers Security and other administration topics such as automation and deployment. Prerequisites Ideal participants are Oracle partners (SIs and resellers) with a background in business information systems and a clientele of customers with ongoing or prospective EPM initiatives. Alternatively, partners with the background described above and an interest in evolving their practice to a similar profile are suitable participants. Further online OPN guided learning path information and webinars are available at: Oracle Hyperion Planning 11 Essentials. Please note that attendees are required to bring a laptop. View here laptop requirements and detailed agenda. ·       REGISTER Here : acceptance is subject to availability and your place will be confirmed within two weeks  ( and for help see the Partner Registration Guide ). Training Location: Oracle Corporation UK Ltd Columbus Room Customer Visit Center 1 South Place London EC2M 2RB Training Dates: 29th October - 2nd November  9:30 am – 5:00 pm BST For more information please contact [email protected].

    Read the article

  • MySQL Connect Keynotes and Presentations Available Online

    - by Bertrand Matthelié
    Following the tremendous success of MySQL Connect, you can now watch some of the keynotes online: The State of the Dolphin – by Oracle Chief Corporate Architect Edward Screven and MySQL Vice President of Engineering Tomas Ulin; and MySQL Perspectives – featuring power users of MySQL who share their experiences and perspectives: Jeremy Cole, DBA Team Manager, Twitter; Daniel Austin, Chief Architect, PayPal; Ash Kanagat, IT Director, and Shivinder Singh, Database Architect, Verizon Wireless. You can also access slides from a number of MySQL Connect presentations in the Content Catalog. Missing ones will be added shortly (provided the speakers consented to it). Enjoy!

    Read the article

  • RPG Monster-Area, Spawn, Loot table Design

    - by daemonfire300
    I currently struggle with creating the database structure for my RPG. I got this far:
    tables:
    area (id)
    monster (id, area.id, monster.id, hp, attack, defense, name)
    item (id, some other values)
    loot (id = monster.id, item = item.id, chance)
    spawn (id = area.id, monster = monster.id, count)
    It is a browser-based game like e.g. Castle Age. The player can move from area to area. If a player enters an area, the system spawns new monsters into the monster table, based on the area.id and using the spawn table data. If a player kills a monster, the system picks the monster.id, looks up the items via the loot table and adds those items to the player's inventory. First, is this smart? Second, I need some kind of "monster_instance" table and "area_instance" table, since each player enters his very own "area" and does damage to his very own "monsters". Another approach would be adding the/a player.id to the monster table, so each monster spawned has its own "player", but I still need to assign them to an area, and I think this would overload the monster table if I put both the player.id and the area.id into it. What are your thoughts?
    Temporary solution:
    monster (id, attackDamage, defense, hp, exp, etc.)
    monster_instance (id, player.id, area_instance.id, hp, attackDamage, defense, monster.id, etc.)
    area (id, name, area.id access, restriction)
    area_instance (id, area.id, last_visited)
    spawn (id, area.id, monster.id)
    loot (id, monster.id, chance, amount, ?area.id?)
    An example system flow would be:
    Player enters area 1: the system creates an area_instance of type area.id = 1 and sets player.location to area.id = 1.
    If the player wants to battle monsters in the current area: the system fetches all spawn entries matching area.id == player.location and creates a new monster_instance for each spawn by fetching the corresponding monster base data from the monster table. If a monster is fetched more than once it may be cached.
    If the player actually attacks a monster: the system updates the corresponding monster_instance; if the monster dies, the instance is removed after creating the loot.
    If the player leaves the area: area_instance.last_visited is set to NOW(); if the player doesn't return to the area within a certain amount of time, the area_instance, including all its monster_instances, is deleted.
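
    To make the instance-per-player idea concrete, here is a hedged, in-memory C# sketch of what the "temporary solution" implies when a player enters an area: spawn rows are joined to monster base data to materialize monster_instance records owned by that player's area_instance. Class and property names are illustrative, not a prescribed schema; in the real game these would be database rows and this logic would be SQL joins and inserts.

    // Illustrative in-memory model of the "temporary solution" tables; in a real game these
    // would be database rows and EnterArea would issue inserts driven by joins.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Monster { public int Id; public int Hp; public int Attack; public int Defense; public string Name; }
    class Spawn { public int AreaId; public int MonsterId; public int Count; }
    class AreaInstance { public int Id; public int AreaId; public int PlayerId; public DateTime LastVisited; }
    class MonsterInstance { public int Id; public int AreaInstanceId; public int MonsterId; public int Hp; }

    class World
    {
        static int _nextId = 1;
        public List<Monster> Monsters = new List<Monster>();
        public List<Spawn> Spawns = new List<Spawn>();
        public List<AreaInstance> AreaInstances = new List<AreaInstance>();
        public List<MonsterInstance> MonsterInstances = new List<MonsterInstance>();

        // Player enters an area: create that player's own area_instance and materialize one
        // monster_instance per spawn row, copying hp from the monster base data.
        public AreaInstance EnterArea(int playerId, int areaId)
        {
            var areaInstance = new AreaInstance { Id = _nextId++, AreaId = areaId, PlayerId = playerId, LastVisited = DateTime.UtcNow };
            AreaInstances.Add(areaInstance);

            foreach (var spawn in Spawns.Where(s => s.AreaId == areaId))
            {
                var baseMonster = Monsters.First(m => m.Id == spawn.MonsterId);
                for (int i = 0; i < spawn.Count; i++)
                    MonsterInstances.Add(new MonsterInstance { Id = _nextId++, AreaInstanceId = areaInstance.Id, MonsterId = baseMonster.Id, Hp = baseMonster.Hp });
            }
            return areaInstance;
        }
    }

    class Demo
    {
        static void Main()
        {
            var world = new World();
            world.Monsters.Add(new Monster { Id = 1, Hp = 20, Attack = 5, Defense = 2, Name = "Rat" });
            world.Spawns.Add(new Spawn { AreaId = 1, MonsterId = 1, Count = 3 });
            var instance = world.EnterArea(playerId: 42, areaId: 1);
            Console.WriteLine($"Area instance {instance.Id}: {world.MonsterInstances.Count} monsters spawned.");
        }
    }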

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel but something went wrong and I'm trying to remove it now. The error message is:
    mhd@Tarek-Laptop:~$ sudo apt-get install -f
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be REMOVED:
    linux-image-2.6.37-020637-generic
    0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
    1 not fully installed or removed.
    After this operation, 111MB disk space will be freed.
    Do you want to continue [Y/n]? y
    (Reading database ... 188780 files and directories currently installed.)
    Removing linux-image-2.6.37-020637-generic ...
    Examining /etc/kernel/postrm.d .
    run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
    run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
    /etc/default/grub: 33: Syntax error: EOF in backquote substitution
    run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
    Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
    dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
    subprocess installed post-removal script returned error exit status 1
    Errors were encountered while processing:
    linux-image-2.6.37-020637-generic
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    The previous unsolved error is on this bug. This is my grub configuration file:
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    GRUB_DEFAULT=0
    #GRUB_HIDDEN_TIMEOUT=0
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    RUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap" video=uvesafb:mode_option=>>1024x768-24<<,mtrr=3,scroll=ywrap"
    GRUB_CMDLINE_LINUX=" vga=792 splash"
    # Uncomment to enable BadRAM filtering, modify to suit your needs
    # This works with Linux (no patch required) and with any kernel that obtains
    # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
    #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
    # Uncomment to disable graphical terminal (grub-pc only)
    #GRUB_TERMINAL=console
    # The resolution used on graphical terminal
    # note that you can use only modes which your graphic card supports via VBE
    # you can see them in real GRUB with the command `vbeinfo'
    GRUB_GFXMODE=1024x768-24
    # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
    #GRUB_DISABLE_LINUX_UUID=true
    # Uncomment to disable generation of recovery mode menu entries
    #GRUB_DISABLE_LINUX_RECOVERY="true"
    # Uncomment to get a beep at grub start
    #GRUB_INIT_TUNE="480 440 1"
    Thank you for answering.

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly “-as-a-Service” models of computing have a different cost model. This is something that sounds obvious on the surface but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better. In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track. Here’s a case in point: Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit. The key is this: from the very outset - the design - make sure you include metrics to measure for the cost/performance (sometimes these are the same) for your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try and force in the application design you have on premises into your new application structure.
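
    As a hedged illustration of how these design decisions turn into a monthly bill, the sketch below compares two hypothetical configurations: one large instance doing everything versus two smaller instances with caching and cheaper table storage. Every rate and usage figure in it is a made-up placeholder chosen only to show the arithmetic; they are not Windows Azure prices.

    // Hypothetical cost model: every rate and usage figure is an assumption for illustration,
    // not an actual Windows Azure price.
    using System;

    class CostSketch
    {
        static decimal MonthlyCost(decimal instanceRatePerHour, int instanceCount,
                                   decimal storageGb, decimal storageRatePerGb,
                                   long transactions, decimal ratePer10KTransactions)
        {
            const int hoursPerMonth = 730;
            return instanceRatePerHour * instanceCount * hoursPerMonth
                 + storageGb * storageRatePerGb
                 + transactions / 10000m * ratePer10KTransactions;
        }

        static void Main()
        {
            // Design A: one large instance, all data in the relational store.
            decimal designA = MonthlyCost(0.48m, 1, 100m, 0.10m, 5000000, 0.01m);

            // Design B: two small instances; caching cuts storage transactions, and part of the
            // data moves to cheaper table storage.
            decimal designB = MonthlyCost(0.12m, 2, 100m, 0.07m, 1500000, 0.01m);

            Console.WriteLine($"Design A: {designA:F2} per month");
            Console.WriteLine($"Design B: {designB:F2} per month");
        }
    }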

    Read the article

  • Push-Based Events in a Services Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a services oriented architecture (on top of Thrift), that I need to expose events and allow listeners. My initial thought was, "create an EventService" to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts which are determined using Zookeeper-based service discovery. So, I'd probably use JMS inside of EventService mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners). When I started considering this, I began looking into the differences between Queues and Topics. Topics unfortunately won't work for me, because (at least for now), all listeners must receive the message (even if they were down at the time the event was pushed, or hadn't made a subscription yet because they haven't completed startup (during deployment, for example) - messages should be queued until the service is available). However, I don't want EventService to be responsible for handling all of the events. I don't think it should have the code to react to events inside of it. Each of the services should do what it needs with a given event. This would indicate that each service would need a JMS connection, which questions the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). However, it also couples all of the services to JMS (when I'd rather that there be a single service that's responsible for determining how to distribute events). What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file, irrelevant for now). It replicates the message and pushes each one back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event would become 3 events in JMS). Then, another thread in EventService (which is replicated, running on multiple hots) would be pulling from the queue, attempting to make the service call to the "listener", and returning the message to the queue (if the service is down), or discarding the message (if the listener completed successfully). tl;dr If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners," (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then, the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
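
    The fan-out described above (one published event replicated into one queued message per configured listener) can be sketched as follows. This is a hedged, in-memory C# illustration rather than the actual Thrift/JMS stack, and the Event shape and listener configuration are assumptions, not a proposed wire format.

    // In-memory stand-in for the EventService fan-out: JMS queues are replaced by per-listener
    // Queue<T> instances and the listener registry by a simple list. Names are illustrative.
    using System;
    using System.Collections.Generic;

    // A generic event envelope shared by all services (shape is an assumption).
    class Event
    {
        public string Type;                         // e.g. "OrderCreated"
        public string SourceService;
        public DateTime OccurredAtUtc;
        public Dictionary<string, string> Payload = new Dictionary<string, string>();
    }

    class ListenerConfig
    {
        public string ListenerName;                 // which service endpoint should be called
        public Func<Event, bool> AppliesTo;         // which events this listener subscribes to
    }

    class EventService
    {
        readonly List<ListenerConfig> _listeners = new List<ListenerConfig>();
        readonly Dictionary<string, Queue<Event>> _queues = new Dictionary<string, Queue<Event>>();

        public void Register(ListenerConfig listener)
        {
            _listeners.Add(listener);
            _queues[listener.ListenerName] = new Queue<Event>();
        }

        // Publish replicates the event into one queue per interested listener, so a listener
        // that is down simply accumulates messages until a dispatcher thread drains its queue.
        public void Publish(Event evt)
        {
            foreach (var listener in _listeners)
                if (listener.AppliesTo(evt))
                    _queues[listener.ListenerName].Enqueue(evt);
        }

        public Queue<Event> QueueFor(string listenerName) => _queues[listenerName];
    }

    class Demo
    {
        static void Main()
        {
            var events = new EventService();
            events.Register(new ListenerConfig { ListenerName = "BillingService", AppliesTo = e => e.Type == "OrderCreated" });
            events.Publish(new Event { Type = "OrderCreated", SourceService = "OrderService", OccurredAtUtc = DateTime.UtcNow });
            Console.WriteLine(events.QueueFor("BillingService").Count); // prints 1
        }
    }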

    Read the article

  • JDK bug migration milestone: JIRA now the system of record

    - by darcy
    I'm pleased to announce the OpenJDK bug database migration project has reached a significant milestone: the JDK has switched from the legacy Sun "bugtraq" system to a new internal JIRA instance as the system of record for our bug tracking. This completes the initial phase of the previously described plan of getting OpenJDK onto an externally visible and writable bug tracker. The identities contained in the current system include recognized OpenJDK contributors. The bug migration effort to date has been sizable in multiple dimensions. There are around 140,000 distinct issues imported into the JDK project of the JIRA instance, nearly 165,000 if backport issues to track multiple-release information are included. Separately, the Code Tools OpenJDK project has its own JIRA project populated with several thousands existing bugs. Once the OpenJDK JIRA instance is externalized, approved OpenJDK projects will be able to request the creation of a JIRA project for issue tracking. There are many differences in the schema used to model bugs between the legacy bug system and the schema for the new JIRA projects. We've favored simplifications to the existing system where possible and, after much discussion, we've settled on five main states for the OpenJDK JIRA projects: New Open In progress Resolved Closed The Open and In-progress states can have a substate Understanding field set to track whether the issues has its "Cause Known" or "Fix understood". In the closed state, a Verification field can indicate whether a fix has been verified, unverified, or if the fix has failed. At the moment, there will be very little externally visible difference between JIRA for OpenJDK and the legacy system it replaces. One difference is that bug numbers for newly filed issues in the JIRA JDK project will be 8000000 and above. If you are working with JDK Hg repositories, update any local copies of jcheck to the latest version which recognizes this expanded bug range. (The bug numbers of existing issues have been preserved on the import into JIRA). Relatively soon, we plan for the pages published on bugs.sun.com to be generated from information in JIRA rather than in the legacy system. When this occurs, there will be some differences in the page display and the terminology used will be revised to reflect JIRA usage, such as referring to the "component/subcomponent" of an issue rather than its "category". The exact timing of this transition will be announced when it is known. We don't currently have a firm timeline for externalization of the JIRA system. Updates will be provided as they become available. However, that is unlikely to happen before JavaOne next week!

    Read the article

  • Convert VARCHAR() columns to NVARCHAR()

    - by ChrisD
    We recently underwent an upgrade that required us to change our database columns from varchar to nvarchar, to support Unicode characters. Digging through the internet, I found a base script which I modified to handle reserved-word table names and maintain the NULL/NOT NULL constraint of the columns. I ran this script:
    use NWOperationalContent -- your catalog name here
    GO
    SELECT 'ALTER TABLE ' + isnull(schema_name(syo.id), 'dbo') + '.[' + syo.name + '] '
        + ' ALTER COLUMN [' + syc.name + '] NVARCHAR(' + case syc.length when -1 then 'MAX'
            ELSE convert(nvarchar(10), syc.length) end + ') ' +
            case syc.isnullable when 1 then ' NULL' ELSE ' NOT NULL' END + ';'
    FROM sysobjects syo
    JOIN syscolumns syc ON syc.id = syo.id
    JOIN systypes syt ON syt.xtype = syc.xtype
    WHERE syt.name = 'varchar'
        and syo.xtype = 'U'
    which produced a series of ALTER statements which I could then execute against the tables. In some cases I had to drop indexes, alter the tables, and re-create the indexes. There might have been a better way to do that, but manually dropping them got the job done.
    use NWMerchandisingContent
    GO
    ALTER TABLE Locale DROP CONSTRAINT PK_Locale
    ALTER TABLE Country DROP CONSTRAINT PK_Country
    GO
    ALTER TABLE dbo.[Campaign] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [UnitOfmeasure] NVARCHAR(200) NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Imperative] NVARCHAR(MAX) NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Instructions] NVARCHAR(MAX) NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleComponent] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Bundle] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Banner] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Video] ALTER COLUMN [Link] NVARCHAR(512) NOT NULL;
    ALTER TABLE dbo.[Video] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [VideoLink] NVARCHAR(512) NOT NULL;
    ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Thumbnail] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [UnitOfMeasure] NVARCHAR(150) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [SwatchColor] NVARCHAR(50) NOT NULL;
    etc.
    GO
    ALTER TABLE Locale ADD CONSTRAINT PK_Locale PRIMARY KEY (LocaleId)
    ALTER TABLE Country ADD CONSTRAINT PK_Country PRIMARY KEY (CountryId)
    Note that this ALTER is non-destructive to the data. Hope this helps.

    Read the article

  • More Maintenance Plan Weirdness

    - by AjarnMark
    I’m not a big fan of the built-in Maintenance Plan functionality in SQL Server.  I like the interface in SQL 2005 better than 2000 (it looks more like building an SSIS package) but it’s still a bit of a black box.  You don’t really know what commands are being run based on the selections you have made, and you can easily make some unwise choices without realizing it, such as shrinking your database on a regular basis.  I really prefer to know exactly what commands and with which options are being run on my servers. Recently I had another very strange thing happen with a Maintenance Plan, this time in SQL 2005, SP3.  I inherited this server and have done a bit of cleanup on it, but had not yet gotten around to replacing the Maintenance Plans with all my own scripts.  However, one of the maintenance plans which was just responsible for doing LOG backups was running more frequently than that system needed, and I thought I would just tweak the schedule a bit.  So I opened the Maintenance Plan and edited the properties of the Subplan, setting a new schedule, saved it and figured all was good to go.  But the next execution of the Scheduled Job that triggers the Maintenance Plan code failed with an error about the Owner of the job.  Specifically the error was, “Unable to determine if the owner (OldDomain\OldDBAUserID) of job MaintenancePlanName.Subplan has server access (reason: Could not obtain information about Windows NT group/user 'OldDomain\OldDBAUserID’..”  I was really confused because I had previously updated all of the jobs to have current accounts as the owners.  At first I thought it was just a fluke, but it happened on the next scheduled cycle so I investigated further and sure enough, that job had the old DBA’s account listed as the owner.  I fixed it and the job successfully ran to completion. Now, I don’t really like mysteries like that, so I did some more testing and verified that, sure enough, just editing the Subplan schedule and saving the Maintenance Job caused the Scheduled Job to be recreated with the old credentials.  I don’t know where it is getting those credentials, but I can only assume that it is the same as the original creator of the Maintenance Plan, and for some reason it insists on using that ID for the job owner.  I looked through the options in SSMA and could not find anything would let me easily set the value that I wanted it to use.  I suspect that if I did something like executing sp_changeobjectowner against the Maintenance Plan that it would use that new ID instead.  I’m sure that there is good reason that it works this way, but rather than mess around with it much more, I’m just going to spend my time rolling out my replacement scripts instead. Chalk this little hidden oddity up as yet one more reason I’m not a fan of Maintenance Plans.
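
    Since the root cause here is stale job ownership, one way to keep an eye on it is to audit the owners of all SQL Agent jobs and reset any that still point at a departed account. Below is a hedged C# sketch using the documented msdb objects (sysjobs and sp_update_job); the server name and login values are placeholders, and this works around the problem rather than changing the Maintenance Plan designer's behavior.

    // Audit SQL Agent job owners and reassign any that still belong to a stale account.
    // Server and login values are illustrative placeholders.
    using System;
    using System.Data.SqlClient;

    class JobOwnerAudit
    {
        static void Main()
        {
            const string connectionString = "Server=.;Database=msdb;Integrated Security=true;"; // assumed
            const string staleOwner = @"OldDomain\OldDBAUserID";
            const string newOwner = @"NewDomain\CurrentDBAUserID";                              // assumed

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // List every Agent job and its current owner.
                var listCmd = new SqlCommand(
                    "SELECT name, SUSER_SNAME(owner_sid) AS owner_name FROM msdb.dbo.sysjobs ORDER BY name;",
                    connection);
                using (var reader = listCmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine($"{reader["name"]} -> {reader["owner_name"]}");

                // Reassign any job still owned by the old account.
                var fixCmd = new SqlCommand(
                    @"DECLARE @job sysname;
                      DECLARE jobs CURSOR FOR
                          SELECT name FROM msdb.dbo.sysjobs WHERE SUSER_SNAME(owner_sid) = @stale;
                      OPEN jobs;
                      FETCH NEXT FROM jobs INTO @job;
                      WHILE @@FETCH_STATUS = 0
                      BEGIN
                          EXEC msdb.dbo.sp_update_job @job_name = @job, @owner_login_name = @newOwner;
                          FETCH NEXT FROM jobs INTO @job;
                      END
                      CLOSE jobs;
                      DEALLOCATE jobs;", connection);
                fixCmd.Parameters.AddWithValue("@stale", staleOwner);
                fixCmd.Parameters.AddWithValue("@newOwner", newOwner);
                fixCmd.ExecuteNonQuery();
            }
        }
    }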

    Read the article

  • Single Responsibility Principle Implementation

    - by Mike S
    In my spare time, I've been designing a CMS in order to learn more about actual software design and architecture, etc. Going through the SOLID principles, I already notice that ideas like "MVC", "DRY", and "KISS", pretty much fall right into place. That said, I'm still having problems deciding if one of two implementations is the best choice when it comes to the Single Responsibility Principle. Implementation #1: class User getName getPassword getEmail // etc... class UserManager create read update delete class Session start stop class Login main class Logout main class Register main The idea behind this implementation is that all user-based actions are separated out into different classes (creating a possible case of the aptly-named Ravioli Code), but following the SRP to a "tee", almost literally. But then I thought that it was a bit much, and came up with this next implementation class UserView extends View getLogin //Returns the html for the login screen getShortLogin //Returns the html for an inline login bar getLogout //Returns the html for a logout button getRegister //Returns the html for a register page // etc... as needed class UserModel extends DataModel implements IDataModel // Implements no new methods yet, outside of the interface methods // Haven't figured out anything special to go here at the moment // All CRUD operations are handled by DataModel // through methods implemented by the interface class UserControl extends Control implements IControl login logout register startSession stopSession class User extends DataObject getName getPassword getEmail // etc... This is obviously still very organized, and still very "single responsibility". The User class is a data object that I can manipulate data on and then pass to the UserModel to save it to the database. All the user data rendering (what the user will see) is handled by UserView and it's methods, and all the user actions are in one space in UserControl (plus some automated stuff required by the CMS to keep a user logged in or to ensure that they stay out.) I personally can't think of anything wrong with this implementation either. In my personal feelings I feel that both are effectively correct, but I can't decide which one would be easier to maintain and extend as life goes on (despite leaning towards Implementation #1.) So what about you guys? What are your opinions on this? Which one is better? What basics (or otherwise, nuances) of that principle have I missed in either design?

    Read the article

  • Impressions from VMworld - Clearing up Misconceptions

    - by Monica Kumar
    Gorgeous sunny weather…none of the usual summer fog…the Oracle Virtualization team has been busy at VMworld in San Francisco this week. From the time exhibits opened on Sunday, our booth staff was fully engaged with visitors. It was great to meet with customers and prospects, and there were many…most with promises to meet again in October at Oracle OpenWorld 2012. Interests and questions ran the gamut - from implementation details to consolidating applications to how does Oracle VM enable rapid application deployment to Oracle support and licensing. All good stuff! Some inquiries are poignant and really help us get at the customer pain points. Some are just based on misconceptions. We’d like to address a couple of common misconceptions that we heard: 1) Rapid deployment of enterprise applications is great but I don’t do this all the time. So why bother? While production applications don’t get updated or upgraded as often, development and QA staging environments are much more dynamic. Also, in today’s Cloud based computing environments, end users expect an entire solution, along with the virtual machine, to be provisioned instantly, on-demand, as and when they need to scale. Whether it’s adding a new feature to meet customer demands or updating applications to meet business/service compliance, these environments undergo change frequently. The ability to rapidly stand up an entire application stack with all the components such as database tier, mid-tier, OS, and applications tightly integrated, can offer significant value. Hand patching, installation of the OS, application and configurations to ensure the entire stack works well together can take days and weeks. Oracle VM Templates provide a much faster path to standing up a development, QA or production stack in a matter of hours or minutes. I see lots of eyes light up as we get to this point of the conversation. 2) Oracle Software licensing on VMware vSphere In the world of multi-vendor IT stacks, understanding license boundaries and terms and conditions for each product in the stack can be challenging.  Oracle’s licensing, though, is straightforward.  Oracle software is licensed per physical processor in the server or cluster where the Oracle software is installed and/or running.  The use of third party virtualization technologies such as VMware is not allowed as a means to change the way Oracle software is licensed.  Exceptions are spelled out in the licensing document labeled “Hard Partitioning". Here are some fun pictures! Visitors to our booth told us they loved the Oracle SUV courtesy shuttles that are helping attendees get to/from hotels. Also spotted were several taxicabs sporting an Oracle banner! Stay tuned for more highlights across desktop and server virtualization as we wrap up our participation at VMworld.

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers'.
    Application server:
    Should be able to run CentOS (or another light Linux distro).
    Should be able to run Apache.
    Should be able to run PHP.
    Should be able to run GD (so it does rely on its CPU).
    Should be extremely reliable and fast.
    Database server:
    Should be able to run MySQL.
    Should be able to... well, do nothing else :P.
    Should be extremely reliable and fast.
    Storage server:
    Should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.).
    Should be able to do nothing else.
    Should be extremely reliable and fast.
    So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the webpages. My questions:
    What services do you recommend?
    Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)?
    How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)?
    What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay "including CDN" + "excluding CDN" when you decide to enable the delivery network? Or do you only have to pay "including CDN"?
    Should I use my own nameserver too or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description?
    I hope you can answer my questions.
    Answers: I shouldn't write my own nameserver software. Instead, I should use something like bind (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Size doesn't matter

    - by ssoolsma
    Whenever I start a new project I *always* break up my code into different projects. Also known as an n-tier solution. The scale of the project doesn't matter, but make sure that each project is responsible for himself (or herself if you prefer). I make sure that I:
    1. At least thought about how the project should work, on the toilet or in a project team meeting.
    2. Have a solution directory and create my projects within it. I like to name my projects (and their folders) by the namespaces. For instance: when I'm creating a piece of (web) software called ChuckNorris, I always include the software name in my projects.
    3. Start off with designing the DataAccess project. I name it ChuckNorris.DataAccess, which lets me easily identify the project in case the project scales a lot.
    4. Build the classes which represent the database structure. Don't stop working on a class until it's finished for now. Also, don't overdo the methods. Build stuff only when it's needed, and don't think: "Hm, that would be cool to have", because most of the time you end up with unused code, and we don't want that.
    5. Build a unit-test project and make sure you create the folder inside the project that it's testing. So, create the ChuckNorris.DataAccess.UnitTest project inside the folder of the data-access project. I would suggest using the NUnit test framework.
    6. In case you thought "hm, I'll skip the unit tests": don't! Just build it - it will save you a lot of time later on.
    7. Now, read 5 again. Build that bloody unit test. Don't skip it. (I can't emphasize this enough.)
    8. Now, every class in the data-access project is responsible for itself. They don't rely on each other. This is what we use the BusinessLogic project for. Start creating the ChuckNorris.BusinessLogic project (not inside the data-access project of course, but within the ChuckNorris folder).
    9. Combine stuff from data-access. This usually involves a lot of copying of the data-access classes and feels silly at first. (We'll get to that later on.)
    10. Now you come to the point of creating a service project. You might not always see why to use it, but see it as a way to expose your business logic to any application (including your own). Sometimes I use it as a so-called "Factory". Every call goes through this factory, so that's the only thing I'm exposing to any program, and I make sure that those methods are the only ones that I allow you to invoke.
    11. Build any UI (website, phone app, forms application, Silverlight, WPF or whatever) and reference it to your service project.
    12. Fall in love (cough) with this approach.
    It's possible that this doesn't seem to make much sense and is very incomplete. Well, that last part is correct. The next post will go into detail on setting up your data-access project and using the Entity Framework.
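
    To make the layering concrete, here is a hedged C# sketch of how the projects might call each other for a single operation. The ChuckNorris namespaces follow the naming convention above, but the classes and methods inside them are illustrative assumptions rather than code from the post.

    // One class per layer; each namespace would be its own project in the real solution.
    using System;
    using System.Collections.Generic;

    namespace ChuckNorris.DataAccess
    {
        // Knows only how to persist and fetch data; no business rules.
        public class CustomerRepository
        {
            readonly Dictionary<int, string> _rows = new Dictionary<int, string>();
            public void Insert(int id, string name) => _rows[id] = name;
            public string GetName(int id) => _rows.TryGetValue(id, out var name) ? name : null;
        }
    }

    namespace ChuckNorris.BusinessLogic
    {
        using ChuckNorris.DataAccess;

        // Combines data-access calls and enforces rules; no UI, no SQL.
        public class CustomerService
        {
            readonly CustomerRepository _repository = new CustomerRepository();

            public void Register(int id, string name)
            {
                if (string.IsNullOrWhiteSpace(name))
                    throw new ArgumentException("Name is required.", nameof(name));
                _repository.Insert(id, name.Trim());
            }

            public string Lookup(int id) => _repository.GetName(id);
        }
    }

    namespace ChuckNorris.Services
    {
        using ChuckNorris.BusinessLogic;

        // The "factory"/facade the UI talks to; the only surface exposed to callers.
        public static class CustomerFacade
        {
            static readonly CustomerService Service = new CustomerService();
            public static void Register(int id, string name) => Service.Register(id, name);
            public static string Lookup(int id) => Service.Lookup(id);
        }
    }

    namespace ChuckNorris.ConsoleUi
    {
        using ChuckNorris.Services;

        class Program
        {
            static void Main()
            {
                CustomerFacade.Register(1, "  Carlos Ray Norris ");
                Console.WriteLine(CustomerFacade.Lookup(1)); // "Carlos Ray Norris"
            }
        }
    }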

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspxSoftware is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I’ve created a list of user stories that I think should be included and sometimes are just assumed. Note this is a work in progress, so I’m looking for your feedback. I’m curious what you would add or change in my list. · As a DBA I am working with a Normalized data model that reflects an agreed upon logical model for the system · As a DBA I am using consistent names for my fields which match the naming standards of my organization · As a DBA my model supports simple CRUD operations against all the entities · As an Application Architect the UI has been validated against the Business requirements and a complete set of user story’s have been created · As an Application Architect the database model has been validated against the UI · As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software’s expected lifecycle · As an Application Architect we have a Deployment diagram that describes how the application components will be deployed · As an Application Architect we have a navigation diagram that describes the typical application flow · As an Application Architect we have identified points of interaction which describes how the UI interacts with the services and the data storage · As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions · As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve it’s business goals · As a Project Manager all team members understand the goals of each release and iteration as they are planned · As a Project Manager all team members understand their role and the roles of others · As a Project Manager we have support of the business to do the right thing even if it is not the expedient thing · As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system · As a Test/QA Analyst we have identified the matrix of supported clients used to access the system including the likely browsers, mobile devices and other interfaces to work with the application · As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create them · As a Test/QA Analyst we have access to a Test environment that is isolated from production and staging environments · As a Test/QA Analyst there we have a way to reset the environment so we can rerun tests when a new version of the software becomes available · As a Test/QA Analyst I am able to automate portions of the test process Thoughts? -mike

    Read the article

  • VS2010 Launch Presentations

    Last week I was in Vegas to present at the DevConnections / VS2010 Launch event. The show was well-attended and everybody I spoke to agreed it was educational and enjoyable. My three talks were all on Wednesday, 14 April 2010, including one at 8am for which I was impressed to see a large turnout in attendance. Pragmatic ASP.NET Tips, Tricks, and Tools: My first session was on tips, tricks, and tools for ASP.NET developers. This is a talk I've given in past years, but which I refine every time. I usually like to have a full session to devote to tools, and a separate talk just for Tips and Tricks, but for this show I was only given the one 75-minute slot, so I had to cut some materials to make things fit. The talk went well, all the demos worked, the attendees seemed to enjoy it, and I like giving it, so hopefully I can continue to present on this topic in future DevConnections shows. Download the ASP.NET Tips, Tricks, and Tools slides and demos. What's New in ASP.NET MVC 2: My second talk of the day followed immediately after the Tips and Tricks talk, and was a brand new talk for me. I have to throw out a thank-you to Phil for letting me see his MIX slide deck before he gave his talk, as that was a big help. The official what's-new document online is also worth checking out if you're interested in this subject. Download the What's New in ASP.NET MVC 2 slides and demos. SOLIDify Your ASP.NET MVC 2 Application: Just because you're using ASP.NET MVC doesn't mean your code can't still end up being a big ball of mud. This session describes a number of principles of software design that can help ensure applications remain loosely coupled and malleable even as they age and increase in features and complexity. This was my last talk of the day and did have one minor demo failure involving a database constraint. I've given this talk many times before, and in this case I had to fit it into a 60-minute timeslot, so I'm not sure I had quite enough time to drive home all of the concepts to everyone in the audience. That said, I did hear a number of positive comments on how the talk went, so that's encouraging. Download the SOLIDify Your ASP.NET MVC 2 Application slides and demos. In my sessions, I promised to have these posted by the end of the weekend; they're going up at 10pm Sunday night (my time), with 2 hours to spare! Enjoy!

    Read the article

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    I think it was in May that I was creating a little filters module based on Full-Text Search. I configured my dev machine, and did the same for two testing servers: one in our company for internal testing before we deployed it to the client, and then the client's testing server. Until last week this build was still on the testing server, and finally we got feedback that we can deploy it to the production one. I only mention this because I lost half a day: I had not correctly remembered what I had done to configure FTS on the previous servers, and I had no notes for it. I foolishly believed in my memory. Lesson learned. For future reference, here is a bunch of steps to configure FTS for searching in *.pdf and *.docx files (and, by the way, in other Office files like *.xlsx).
    1. From the page (link) download and install the *.pdf IFilter for FTS.
    2. Add the path of the catalog where you installed the plugin to the PATH global system variable. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    3. From the page (link) download FilterPackx64.exe and install it.
    4. Now from SSMS execute the following procedures:
    sp_fulltext_service 'load_os_resources', 1
    sp_fulltext_service 'verify_signature', 0
    5. Restart the server.
    6. Now we must check if the plugins are visible:
    select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
    select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    7. If we see a result, then we can assume that everything is OK*.
    8. Right now we can create a catalog for FTS and indexes on the appropriate columns.
    *I lost a lot of hours finding out why the plugin for the *.pdf files wasn't indexing any file in the database, even though there was a row available for this plugin in the sys.fulltext_document_types table. After deeper investigation I found that the *.pdf files actually were indexed; at least the EOF sign was added to the index for each file, and nothing more. In the end the problem was that I forgot to add the /bin to the plugin path in the PATH variable.
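
    If you want to script the verification in step 6 rather than run it by hand, the hedged C# sketch below runs the same sys.fulltext_document_types check from code; the connection string is a placeholder for your own server and catalog.

    // Verify that the .pdf and .docx IFilters are registered with Full-Text Search.
    // The connection string is an assumed placeholder.
    using System;
    using System.Data.SqlClient;

    class FullTextFilterCheck
    {
        static void Main()
        {
            const string connectionString = "Server=.;Database=master;Integrated Security=true;";
            const string sql =
                "select document_type, path from sys.fulltext_document_types " +
                "where document_type in ('.pdf', '.docx');";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    if (!reader.HasRows)
                        Console.WriteLine("No IFilter registered for .pdf or .docx.");
                    while (reader.Read())
                        Console.WriteLine($"{reader["document_type"]} -> {reader["path"]}");
                }
            }
        }
    }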

    Read the article

  • Data Model Dissonance

    - by Tony Davis
    So often at the start of the development of database applications, there is a premature rush to the keyboard. Unless, before we get there, we’ve mapped out and agreed the three data models, the Conceptual, the Logical and the Physical, then the inevitable refactoring will dog development work. It pays to get the data models sorted out up-front, however ‘agile’ you profess to be. The hardest model to get right, the most misunderstood, and the one most neglected by the various modeling tools, is the conceptual data model, and yet it is critical to all that follows. The conceptual model distils what the business understands about itself, and the way it operates. It represents the business rules that govern the required data, its constraints and its properties. The conceptual model uses the terminology of the business and defines the most important entities and their inter-relationships. Don’t assume that the organization’s understanding of these business rules is consistent or accurate. Too often, one department has a subtly different understanding of what an entity means and what it stores, from another. If our conceptual data model fails to resolve such inconsistencies, it will reduce data quality. If we don’t collect and measure the raw data in a consistent way across the whole business, how can we hope to perform meaningful aggregation? The conceptual data model has more to do with business than technology, and as such, developers often regard it as a worthy but rather arcane ceremony like saluting the flag or only eating fish on Friday. However, the consequences of getting it wrong have a direct and painful impact on many aspects of the project. If you adopt a silo-based (a.k.a. Domain driven) approach to development), you are still likely to suffer by starting with an incomplete knowledge of the domain. Even when you have surmounted these problems so that the data entities accurately reflect the business domain that the application represents, there are likely to be dire consequences from abandoning the goal of a shared, enterprise-wide understanding of the business. In reading this, you may recall experiences of the consequence of getting the conceptual data model wrong. I believe that Phil Factor, for example, witnessed the abandonment of a multi-million dollar banking project due to an inadequate conceptual analysis of how the bank defined a ‘customer’. We’d love to hear of any examples you know of development projects poleaxed by errors in the conceptual data model. Cheers, Tony

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including: trace, log, checkpoint before, suspend, abort, and several others. With 11gR1, events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are:
    - Automatic switchover to the secondary system during planned outages
    - Better monitoring of source systems’ performance and automated switchover to the standby system in case of an outage on the primary system
    - Automatic switchover from initial load to changed data movement
    - Automatic synchronization of any type of batch processing taking place on both the source and target databases, for database consistency
    - Automatic stoppage of the Delivery module to allow end-of-day reporting
    - Finding, tracking, and reporting on transactions that are of interest, including ones that do not have primary keys or transaction record numbers
    If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions of the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • 3 Reasons You Need To Know Something About Every Technology

    - by Tim Murphy
    I make my living as a consultant and a general technologist.  I credit my success to the fact that I have never been afraid to pick up any product, language or platform needed to get the job done.  While Microsoft technologies are my mainstay, I have done work on mainframe and UNIX platforms and have worked with a wide variety of database engines.  Each one has its use, and most times it is less expensive to find a way to communicate with an existing system than to replace it. So what are the main benefits of expending the effort to learn a new technology?
    - New ways to solve problems
    - Accelerated development
    - The ability to advise clients and win new business opportunities
    By a new technology I mean one that you haven't had experience with before.  It doesn't have to be the one that just came out yesterday.  As they say, those who do not learn from history are bound to repeat it.  If you can learn something from an older technology, it can be just as valuable as the shiny new one.  Either way, when you add another tool to your kit you get a new view on each problem you face.  This makes it easier to create a sound solution.
    The next thing you gain from working with different products and techniques is the ability to solve problems more efficiently.  Many times, if you are working with a new language, you will find that there are specific design patterns in common use with it.  These can usually be applied with most languages; you just need to be exposed to them.
    The last point is about helping your clients and helping yourself.  If you can get in on technologies early, you will have an advantage over your competition in the market.  You will also be able to honestly advise your client on why they should or should not go with a new product.  Being able to compare products and their features is always an ability that stakeholders appreciate.
    You don't need to learn every detail of a product.  Learn enough to function and get an idea of how to use the technology.  Keep eating those technology Wheaties and you will be ready to go the distance in any project. del.icio.us Tags: Technology,technologists,technology generalist,Software Architecture

    Read the article

  • Techniques to re-factor garbage and maintain sanity?

    - by Incognito
    So I'm sitting down to a nice bowl of C# spaghetti, and need to add something or remove something... but I have challenges everywhere: functions passing arguments that don't make sense, someone who doesn't understand data structures abusing strings, redundant variables, comments that are red herrings, internationalization handled at every individual output, SQL that doesn't use any kind of DBAL, database connections left open everywhere... Are there any tools or techniques I can use to at least keep track of the "functional integrity" of the code (meaning my "improvements" don't break it), or a resource online with common "bad patterns" that explains a good way to transition code? I'm basically looking for a guidebook on how to spin straw into gold. Here are some samples from the same 500-line function:

        protected void DoSave(bool cIsPostBack)
        {
            //ALWAYS a cPostBack
            cIsPostBack = true;
            SetPostBack("1");
            string inCreate ="~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~";
            parseValues = new string []{"","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","",""};
            if (!cIsPostBack)
            {
                //.......
                //....
                //....
                if (!cIsPostBack) { } else { }
                //....
                //....
                strHPhone = StringFormat(s1.Trim());
                s1 = parseValues[18].Replace(encStr," ");
                strWPhone = StringFormat(s1.Trim());
                s1 = parseValues[11].Replace(encStr," ");
                strWExt = StringFormat(s1.Trim());
                s1 = parseValues[21].Replace(encStr," ");
                strMPhone = StringFormat(s1.Trim());
                s1 = parseValues[19].Replace(encStr," ");
                //(hundreds of lines of this)
                //....
                //....
                SQL = "...... lots of SQL .... ";
                SqlCommand curCommand;
                curCommand = new SqlCommand();
                curCommand.Connection = conn1;
                curCommand.CommandText = SQL;
                try { curCommand.ExecuteNonQuery(); } catch {}
                //....
        }

    I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledgebase on how to do this sort of thing, finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit,
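    One technique often suggested for exactly this situation is to pin down the current behaviour with characterization (golden master) tests before touching anything, so that a refactoring which changes observable behaviour fails a test rather than slipping through. Below is a minimal sketch of the idea using NUnit; the LegacyContactPage and TestDatabase helpers and the file names are hypothetical stand-ins for whatever harness fits the real code, not part of the code quoted above.

        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class DoSaveCharacterizationTests
        {
            // Characterization test: record what the legacy method does *today*
            // and assert on that snapshot, without judging whether it is "right".
            [Test]
            public void DoSave_WithKnownInput_ProducesPreviouslyRecordedRows()
            {
                // Arrange: restore a known database state (hypothetical helper)
                TestDatabase.RestoreSnapshot("before_dosave.bak");
                var page = new LegacyContactPage();   // hypothetical wrapper around the real page class

                // Act: exercise the tangled method exactly as production would
                page.DoSave(true);

                // Assert: compare the resulting rows against a recorded "golden" dump
                string actual = TestDatabase.DumpTable("Contacts");
                string expected = File.ReadAllText("golden/contacts_after_dosave.txt");
                Assert.AreEqual(expected, actual);
            }
        }

    With a few of these in place, small mechanical refactorings (extract method, rename, inline variable) can be applied one at a time with some confidence that behaviour has not drifted.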

    Read the article

  • Another Custom Property Locator: a Library of Books

    - by Cindy McMullen
    Introduction
    The previous post gave an introduction to custom property locators and showed how to create one using JDeveloper.  This post continues the custom locator theme with a slightly more complex locator: a library of books.  It demonstrates using the DAO pattern to delegate data access from the Locator, which is likely how many actual backing stores will integrate with the Locator.  You can imagine that, rather than a library of books, the data store might be a user database of sorts; the same sort of pattern would apply. This post uses the BookLocator example originally shown in the WebCenter documentation, but has:
    - updated source code to reflect the final Property APIs
    - the steps for generating the namespace and property definition files via JDeveloper
    - detailed usage of the PropertyService APIs
    Getting Started
    If you're new to JDeveloper, you might want to check out this tutorial.  There is also the "Jump-Start to using Personalization" blog post that you might find useful.  Otherwise, if you're already familiar with both, you can skip those tutorials and jump right in to using JDeveloper. Download the BookLocator.zip file (which has been updated from the original post) and unzip it to a new directory.  Start JDeveloper, navigate to the BookLocator.jws file, and open it.   It should look something like this: The Properties Namespace file contains the property definitions and property set definitions you define.  It is explained in more detail in the Namespace documentation.  Although this example doesn't show it, the property set definitions have the ability to reference multiple locators per property.   This can be done by right-clicking on the 'Locator Info' box.  Configure the contents of the Locator Map by editing locators and mapping them to available property names in the property set definition.
    Compiling, deploying, and running your locator
    The rest of the steps in this tutorial basically follow those in the previous blog on custom locators, and won't be repeated here.   A scenario to invoke your locator is included with the sample app: see the BookProperties.scenarios_diagram above.
    Summary
    This post demonstrates a simple library of books accessed by the BookPropertyLocator via the DAO layer.  This is a useful pattern for more realistic property retrievals, such as a backing user store.  It also points out the possibility of retrieving properties from multiple locators, which would be quite handy for retrieving user attributes from multiple sources.

    Read the article

< Previous Page | 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033  | Next Page >