Search Results

Search found 2032 results on 82 pages for 'legacy systeme'.

Page 49 of 82

  • Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter

    - by Kellsey Ruppel
    Register Now for this Webcast: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter. The Los Angeles Department of Building & Safety (LADBS) is one of the largest construction permitting departments in the country, serving over 350,000 walk-in and 530,000 phone customers and issuing over 110,000 permits worth $3 billion every year. LADBS needed a way to migrate walk-in and phone transactions to customer self-service, so it turned to Oracle WebCenter and teamed with Oracle Partner 3Di to deliver a customer self-service portal that lowers the cost of customer service operations while increasing customer satisfaction. Attend this Webcast to learn how Oracle WebCenter has allowed LADBS to:
    - Deliver a state-of-the-art customer self-service portal
    - Reduce traffic on high-cost, low-satisfaction customer service channels
    - Integrate business workflows and legacy applications
    When: Wednesday, November 14, 2012, 10 a.m. PT / 1 p.m. ET
    Presented by: Giovani Dacumos, Director of Systems, Los Angeles Department of Building & Safety; Jing Reyes, Applications Development Group Manager, Los Angeles Department of Building & Safety; Rajiv Desai, CEO, 3Di; Sheetal Paranjpye, Project Manager, 3Di

    Read the article

  • Brand New Annotations Support

    - by Ondrej Brejla
    Hi all! Today we would like to introduce our brand new annotation support for NetBeans 7.2. The first thing that is different is the look of annotations in code completion. As you can see, there is a new annotation icon and an annotation type. Because we have a lot of modules with their own annotations, we distinguish them in the code completion window by their type. We support annotations for: ApiGen (legacy PHPDoc annotations), PHPUnit, Doctrine 2 (ORM and ODM) and Symfony 2. Every annotation can be associated with some context. We recognize four of them: function, class/interface (type), method and field. This means that you will get only the proper annotations for your class field, as well as for your global function. Do you have your own annotations? Or do you simply miss some? It is not hard to add them: we have a simple UI for adding your custom annotations! It's in Tools -> Options -> PHP -> Annotations. Here you can simply add, edit or delete your annotations. When you create a new one, all fields are prefilled with default values, so you really don't have to remember "how to use that crazy FreeMarker syntax". If you are satisfied with your new annotation, you can see it in the code completion window among the other annotations. As you can see, it has its own "Custom" type. That's all for today and, as usual, please test it and if you find something strange, don't hesitate to file a new issue (component php, subcomponent Editor). Thanks.

    Read the article

  • Ubuntu 12.04.2 Dual boot UEFI Windows 8 Preinstalled CX21903W Ultrabook

    - by user180782
    Hi, I have a problem trying to install Ubuntu. The machine is a CX Ultrabook, model CX.21903W, Intel i5 with a 500 GB hard disk, 8 GB RAM and a 32 GB SSD. Following "Installing Ubuntu on a Pre-Installed Windows 8 (64-bit) System (UEFI Supported)", and according to the step-by-step guide:
    1. We created a partition (70 GB) from within Win8 using its own partitioning tool.
    2. Confirm-SecureBootUEFI = True.
    3. From Win8, Shift + Restart, and from the special menu we selected the UEFI Firmware Settings.
    4. From the BIOS options: Option 1) Disable Secure Boot; Option 2) Disable UEFI (not available). From Option 1, three configurations were tried:
    - With Secure Boot enabled, we can't even boot Ubuntu: a red window appears saying the software is improperly signed.
    - With Secure Boot disabled, boot order 1: UEFI: USB, 2: Windows Boot Manager, 3: Others, and CSM (Compatibility Support Module) enabled: GRUB appears, but selecting "Try Ubuntu" leads to a black window and nothing happens. The same result if "Install Ubuntu" is selected.
    - With Secure Boot disabled, boot order 1: USB (no UEFI), 2: Windows Boot Manager, 3: Others, and CSM enabled: GRUB appears, selecting "Try Ubuntu" works, Ubuntu boots and we can even install it.
    5. After rebooting and just changing the boot order to 1: Ubuntu, 2: Windows Boot Manager, 3: Others, nothing happens.
    6. Booting from the live USB again and, as instructed, running Boot-Repair (a warning window says Ubuntu is working in legacy mode).
    7. After saving changes and rebooting, GRUB works, but selecting Ubuntu gives a black window and nothing happens. Selecting Win8, Win8 boots and works.
    Until now we haven't been able to complete the Ubuntu installation. Any suggestion will be welcomed. Kind regards and thanks in advance.

    Read the article

  • Dynamic endpoint binding in Oracle SOA Suite by Cattle Crew

    - by JuergenKress
    Why is dynamic endpoint binding needed? Sometimes a BPEL process instance has to determine at run-time which implementation of a web service interface is to be called. We'll show you how to achieve that using dynamic endpoint binding. Let's imagine the following scenario: we're running a car rental agency called RYLC (Rent Your Legacy Car) which operates different locations. The process of renting a car is basically identical for all locations except for determining which cars are currently available. There are three different implementations of the GetAvailableCars service. But how can we call them dynamically at run-time using Oracle SOA Suite? How to dynamically set the service endpoint: there are just a couple of implementation steps we need to perform to enable dynamic endpoint binding:
    - create a new SOA project in JDeveloper
    - add a CarRental BPEL process
    - add an external reference to the GetAvailableCars service within the composite
    - create a DVM file containing the URIs by which the services for the different locations can be accessed
    - set the endpointURI property on the Invoke component calling the GetAvailableCars service (the value is taken from the DVM file)
    Read the complete article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Bulk Rename Tool is a Lightweight but Powerful File Renaming Tool

    - by Jason Fitzpatrick
    There's no need to settle for overly simplistic file renaming tools as long as Bulk Rename Tool is around. It's lightweight, insanely customizable, portable, and sure to make short work of any renaming task you throw at it. Bulk Rename Tool is a great portable application (available as an installed version if you crave context menu integration) that blasts through file renaming tasks. The main panel is intimidatingly packed with toggles and variables you can alter; this isn't a one-click solution by any means. That said, once you get comfortable with the interface it's lightning fast and extremely flexible. One tip that will save you an enormous amount of frustration when you get started: make sure to highlight the files you want to change in the file preview window (located in the upper right corner), or else you won't see the preview and won't know whether the changes you're making in the control panel are yielding the file names you desire. Hit up the link below to read more and grab a copy; Bulk Rename Tool is free, Windows only.

    Read the article

  • SharePoint Content Type Cheat Sheet

    - by Bil Simser
    Principle: Any application or solution built in SharePoint must use a custom content type rather than adding columns to lists. The only exception to this is one-off solutions that have no life-cycle, proof-of-concepts, etc.
    Creating Content Types:
    - Web UI. Not portable, POC only.
    - C# or Declarative (XML). Must deploy these as Features.
    Rule: Do not change the base XML for a Content Type after deploying. The only exception to this rule is that you can re-deploy a modified Content Type definition only after completely removing it from the environment (either programmatically or by hand).
    Updating Content Types (update and push down to child types):
    - Web UI. Manual for each environment. Document the steps required for repeatability.
    - Feature Upgrade. Preferred solution.
    - C#. If you created the content type through code you might want to go this route.
    - Create new modified Content Types and hide the old one. Not recommended, but useful for legacy.
    References: Create Custom Content Types in SharePoint 2010 (C#), Content Type Definitions (XML), Creating Content Types (XML and C#), Updating Approaches, Updating Child Content Types.
    Agree or disagree?
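    For illustration only, here is a minimal C# sketch (SharePoint 2010 server object model) of the "create through code, then update and push down to child types" path described above. The "Invoice" content type and "InvoiceNumber" site column are hypothetical, and in a real solution this would live inside a Feature receiver:

    using Microsoft.SharePoint;

    public static class ContentTypeSketch
    {
        public static void CreateInvoiceContentType(SPWeb web)
        {
            // Derive the new content type from the built-in Item content type.
            SPContentType parent = web.AvailableContentTypes[SPBuiltInContentTypeId.Item];
            SPContentType invoice = new SPContentType(parent, web.ContentTypes, "Invoice");
            invoice = web.ContentTypes.Add(invoice);

            // Attach an existing site column (assumed to already exist on the web).
            invoice.FieldLinks.Add(new SPFieldLink(web.Fields["InvoiceNumber"]));
            invoice.Update();
        }

        public static void UpdateInvoiceContentType(SPWeb web)
        {
            SPContentType invoice = web.ContentTypes["Invoice"];
            invoice.Group = "Custom Content Types";

            // true = push the change down to derived content types and list copies.
            invoice.Update(true);
        }
    }

    The push-down behavior is the same thing the Web UI and Feature Upgrade options above achieve; the C# route simply makes it explicit via Update(true).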

    Read the article

  • Is the development of CLI apps considered "backward"?

    - by user61852
    I am a fledgling DBA with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve daily repetitive tasks or eliminate human error from more complex, albeit not so daily, tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of one process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backward", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but are live projects with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also consider that all solutions, whether web or desktop, should have command-line, non-interactive options. Some people consider this a waste of programming resources. Is this goal a worthy one in a software project?

    Read the article

  • Installing the AMD Proprietary Drivers broke my 12.10 desktop

    - by Drybones5
    I decided to download and install the AMD Legacy Catalyst driver 12.6 from AMD's website. I ran the .run file, the GUI shown in the second image below appeared, and I installed it that way. On reboot, I saw the first image below: no windows, buttons, launcher or desktop environment, though I managed to open Firefox and view the screenshots via right-click > Open With. It took some time, but I figured out how to remove the driver and got back to normal on the default open source drivers I had before; I purged the old drivers and reconfigured xorg to make sure. How should I go about installing the AMD-made drivers? Are they even compatible with Ubuntu 12.10 yet? And if so, would I even need them for 3D-heavy applications like Team Fortress 2 or other games? I didn't install Ubuntu just for Steam; I've been using it on and off for a few years. Valve has mentioned that they are working on graphics drivers with NVIDIA, AMD, and Intel, and Nvidia released a new driver with the Steam Linux beta release. Is AMD supposed to have a new driver coming out soon as well? I'm using an ATI Radeon HD 4850 1GB. My entire desktop after installing the proprietary drivers: http://i.stack.imgur.com/jTbQz.jpg GUI for the AMD Catalyst install: http://i.stack.imgur.com/UkYWn.png

    Read the article

  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping code in living projects up to date - including the libraries/frameworks that they use. Part of it is that I believe a web app is more secure if it is fully patched and up to date. Part of it is just a touch of obsessive compulsiveness on my part. Over the past seven months, we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to Cake PHP. (We chose Cake because it gave us the chance to do a very rapid rewrite of our software, and enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc. Cake is now being updated to 2.0. And, there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have radically changed, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch because, unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new Cake framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed. And this is what gets me every time. So many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a new shiny toy, a few critical patches will be done to older branches, but most of their focus is going to be on the new code base. How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?

    Read the article

  • Dlink DWA-643 ExpressCard / Atheros AR5008 can't connect to wifi networks

    - by Justin Kelly
    I've just purchased a D-Link DWA-643 Xtreme N ExpressCard Notebook Adapter, but it can't connect to my wireless network. The card is listed on the FSF website; refer to the links below:
    http://www.fsf.org/resources/hw/index_html/net/wireless/index_html/cards.html
    http://www.dlink.com.au/products/?pid=550
    Ubuntu sees the card as using the Atheros AR5008 chipset (refer to the image below). The card lights up and I can see the available wifi networks using this card, so it seems to "just work" on Ubuntu 12.04, but when I try to connect to my networks, it fails. I've tried setting the network to all the different options (WEP, WPA2, no encryption, etc., b/g/n) but Ubuntu still can't connect. I've also installed wicd but still couldn't connect. Has anyone got a DWA-643 to work in Ubuntu? Or does anyone have any suggestion on how to get it to connect? Any help would be greatly appreciated.
    Note: the laptop has built-in wifi, but it's Broadcom and only works at dial-up speed; I've had nothing but trouble using the Broadcom drivers, so I purchased the FSF-recommended ExpressCard hoping it would "just work" on the latest Ubuntu. I have tried to disable the built-in Broadcom wifi, but even with the Broadcom driver uninstalled and unavailable, the D-Link still wouldn't connect. Previously I had MAC address filtering on the router; I've added the D-Link's MAC and also disabled MAC address filtering entirely, still no luck.
    lspci output below:
    18:00.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01)
    Subsystem: D-Link System Inc Device 3a6f
    Flags: bus master, fast devsel, latency 0, IRQ 18
    Memory at e4000000 (64-bit, non-prefetchable) [size=64K]
    Capabilities: [40] Power Management version 2
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit-
    Capabilities: [60] Express Legacy Endpoint, MSI 00
    Capabilities: [90] MSI-X: Enable- Count=1 Masked-
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Virtual Channel
    Kernel driver in use: ath9k
    Kernel modules: ath9k

    Read the article

  • How not to suffer from ideologists when you're a pragmatic person?

    - by Lukas Eder
    My story: I'm a pragmatic person. Sometimes the simplest solution to a problem that gets the job done is the one that fits best for me, if it's not an utter blasphemy against, and reproach to, every design principle. Check out my answer to this question on Stack Overflow. Simple. Works. Was accepted. Could be improved. Is clearly not perfect. And along comes this guy. He downvotes me, comments on the question about how his answer is better and more accurate, etc., and calls me "plain wrong". Reminds me of this comic strip. :-) While on Stack Overflow I can laugh at these things because those people are far away, in the real world I'm suffering from ideologies every now and then. Heck, I'm not creating a miracle piece of software; I need to keep that huge legacy thing running, and it's an adventure to me every day. I don't have the time or passion to beautify my code (or other people's code) to that extent. My question(s): How do you deal with ideologies / ideologists when you're a pragmatic person? How do you deal with pragmatism / pragmatists when you're an ideological person? I'm interested in both points of view. Tell me about your experience. But please, be fair, somewhat objective, and understand that you may NOT be entirely correct and your opinion is NOT the only true one... :-)

    Read the article

  • How do I target a specific driver for libata kernel parameter modding?

    - by DanielSmedegaardBuus
    Sorry for the cryptic title; I'm not sure how to phrase it. This is it in a nutshell: I'm running a 22-disk setup, 19 of those in a ZFS array, 15 of those backed by three port multipliers attached to SATA controllers driven by the sata_sil24 module. When running at full speed (SATA2, i.e. 3 Gbps), operation is pretty quirky (simple read errors will throw an entire PMP into spasms for a long time, sometimes with pretty awful results). Booting with the kernel parameter libata.force=1.5G to force the SATA controllers into "legacy" speeds completely fixes all issues with the PMPs. The thing is, my ZFS pool is backed by a fast cache SSD on my ICH10R controller. Another SSD on this same controller holds the system. Using libata.force=1.5G immediately shaves about 100 MB/s off the transfer rate of my SSDs. For the root drive that's not such a big deal, but for the ZFS cache SSD it is: it effectively makes the entire zpool slower for sustained transfers than it would have been without the cache drive. Random access and fs tree lookups, of course, still benefit. I'm hoping, though, that there's some way to pass the .force=1.5G parameter on to just the three SATA controllers backed by the sata_sil24 module. But listing the module options, no such option exists. Is this possible? And if so, how? Thanks :)

    Read the article

  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First, some background about the application: it's a 600k LOC Java RCP code base with these major problems:
    - massive code duplication
    - no encapsulation; most private data is accessible from outside, and some of the business data has also been made into singletons, so it's changeable not just from outside but from everywhere
    - no business model; business data is stored in Object[] and double[][], so no OO
    There is a good regression test suite, and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers, but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start to attack the problem to get some coverage quickly, so I'm able to show progress to management (and in fact start benefiting from the safety net of JUnit tests)? I do not want to employ tools that generate regression test suites, e.g. AgitarOne, because these tests do not check whether something is correct.

    Read the article

  • developers-designers-testers interaction [closed]

    - by user29124
    Sorry for my bad English; you may not want to read this and waste your time, because it is just the lament of a layman developer... It seems no one wants to learn anything at my workplace. We have the Mantis bug tracker, but our testers use Google Docs for reports and only developers and the team lead report bugs in Mantis. We have SVN for version control and use Smarty as the template system, but our designers give us pure HTML in archives (sometimes it's ugly for programmers, but mostly it's OK), and changes to the design made by programmers go nowhere (I mean designers use their own obsolete HTML and CSS most of the time). We have a testing environment, but designers don't have access to it, even with restricted accounts. So we can only ask them where to look for the problem and then investigate it ourselves (and make changes to the CSS ourselves, which go nowhere most of the time...). I will not even mention legacy code without documentation, tests, or any requirements; the real issue is the absence of real interaction in the programmers-designers-testers triangle. I'm not talking about using HAML, SASS, continuous integration, or anything else, just about using basic tools by all participants of the development process. Maybe the absence of communication is not a problem in short projects that finish in 2 months, but it is in projects that last for years. Any comments please...

    Read the article

  • fglrx: "No matching Device section for instance... found" - how to fix it?

    - by Lejo
    I have an HD 4850 card and Ubuntu 12.10, and I installed the legacy drivers using the makson96 PPA. The issue is that fglrx cannot detect my device and loads the VESA BIOS instead. I had the same problem on Ubuntu 11.10 and 12.04. I want to manually help fglrx find a matching device to load, as it should do. It is interesting: why does fglrx search for a device on bus PCI:0@1:0:1? In xorg.conf a different bus is indicated:
    Section "Device"
        Identifier "aticonfig-Device[0]-0"
        Driver "fglrx"
        BusID "PCI:1:0:0"
    EndSection
    fglrxinfo:
    display: :0.0 screen: 0
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: ATI Radeon HD 4800 Series
    OpenGL version string: 3.3.11653 Compatibility Profile Context
    Here is a part of my Xorg log:
    [ 3.846] (II) VESA: driver for VESA chipsets: vesa
    [ 3.846] (II) FBDEV: driver for framebuffer: fbdev
    [ 3.846] (++) using VT number 7
    [ 3.846] (WW) Falling back to old probe method for fglrx
    [ 3.883] (II) Loading PCS database from /etc/ati/amdpcsdb
    [ 3.883] (--) Assigning device section with no busID to primary device
    [ 3.883] (--) Chipset Supported AMD Graphics Processor (0x9442) found
    [ 3.884] (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found
    [ 3.884] (II) AMD Video driver is running on a device belonging to a group targeted for this release
    [ 3.884] (II) AMD Video driver is signed
    [ 3.884] (II) fglrx(0): pEnt->device->identifier=0xb7791d8f
    [ 3.884] (WW) Falling back to old probe method for vesa
    [ 3.884] (WW) Falling back to old probe method for fbdev
    From lspci I finally found out that my video card is in the 01:00.0 slot. Logically, if fglrx searches for the video card device in the wrong place, it will not find it.
    01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV770 [Radeon HD 4850]
    Thanks in advance.

    Read the article

  • Restoring GRUB2 on Software RAID 0 after Windows 7 wiped it using LiveCD

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system. However, I needed to install Windows 7 back, and I expected that it would alter GRUB, which it did. Right now, the partitions on my software RAID 0 look like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7. I've been following some guides and I always get stuck at GRUB not being able to detect the usual RAID content. I've tried running:
    sudo grub
    > root (hd0,0)
    GRUB complains it couldn't find my hard disk. So I tried:
    find (hd0,0)
    And it complains that it couldn't find anything. So I tried:
    find /boot/grub/stage1
    It said "file not found". Here's the text from the console:
    ubuntu@ubuntu:~$ grub
    Probing devices to guess BIOS drives. This may take a long time.
    [ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename. ]
    grub> root (hd0,0)
    Error 21: Selected disk does not exist
    grub> find /boot/grub/stage1
    Error 15: File not found
    Fortunately, one person suggested that what I've been trying to do is for GRUB Legacy, not GRUB2. So I went to the suggested website (http://grub.enbug.org/Grub2LiveCdInstallGuide), tried to look around, and ran:
    ubuntu@ubuntu:~$ sudo fdisk -l
    Unable to seek on /dev/sda
    This is just step 2 of the instructions in the guide, and I cannot proceed because fdisk cannot seek /dev/sda. However:
    ubuntu@ubuntu:~$ sudo dmraid -r
    /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0
    /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0
    So what now? Do you have any idea how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)? Honestly, I am lost, very lost, in trying to restore GRUB2 on this software RAID 0 system right now.

    Read the article

  • using a wiki for requirements

    - by apollodude217
    Hi, I'm looking into ways of improving requirements management. Currently, we have a Word document published on a web site. Unfortunately, we cannot (to my knowledge) look at changes from one revision to the next. I would greatly prefer to be able to do so, much like with a wiki or a VCS (or both, like the wikis on Bitbucket!). Also, each document describes changes devs are expected to deliver by a given deadline. There is no collection of accumulated app features documented anywhere, so it's sometimes hard to distinguish between a bug and a (poorly designed) feature when trying to make quick fixes to legacy apps. So I had an idea I wanted to get feedback on. What about:
    - Using a wiki so that we can track who changed what and when (mostly to even see whether any edits were made since the last time one looked).
    - Having one wiki page per product rather than one per deadline, keeping up with all features of the product rather than the changes that should be implemented. This way, I can look at a particular revision of the page to see what the app should do at a given point in time, and I can look at changes to the page since the last release for the requirements to be implemented by the next deadline.
    Waddayathink?

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-27

    - by Bob Rhubart
    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen
    "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g; How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?"
    Process Oracle OER Events using a simple Web Service | Bob Webster
    Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types."
    Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients | Andre Correa
    "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen.
    WebCenter Content (WCC) Trace Sections | ECM Architect
    ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace sections.
    Thought for the Day: "A complex system that works is invariably found to have evolved from a simple system that worked." — John Gall (Source: SoftwareQuotes.com)

    Read the article

  • Recommended design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts: what would be the recommended way to model the following scenario? I have a class hierarchy similar to the following one:
    class Base { ... }
    class Derived1 : Base { ... }
    class Derived2 : Base { ... }
    ...
    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime; however, I'm not sure how to handle the class hierarchy. Currently I have a tightly-coupled design that looks like this:
    interface ICompressor
    {
        byte[] Compress(Base instance);
    }
    class Strategy1Compressor : ICompressor
    {
        byte[] Compress(Base instance)
        {
            // Common compression guts for Base class
            ...
            if( instance is Derived1 )
            {
                // Compression guts for Derived1 class
            }
            if( instance is Derived2 )
            {
                // Compression guts for Derived2 class
            }
            // Additional compression logic to handle other class derivations
            ...
        }
    }
    As it is, whenever I add a new derived class inheriting from Base, I have to modify all compression strategies to take the new class into account. Is there a design pattern that allows me to decouple this, and lets me easily introduce more classes into the Base hierarchy and/or additional compression strategies?
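    Not an authoritative answer, just one possible direction sketched in C# against the types above: keep ICompressor as the pluggable strategy, but let each strategy look up per-type handlers in a registry, so adding a new derived class means registering one handler instead of editing every strategy. The names RegistryBasedCompressor, Register and CompressBase are hypothetical.

    using System;
    using System.Collections.Generic;

    class RegistryBasedCompressor : ICompressor
    {
        // Maps a concrete type in the Base hierarchy to its compression handler.
        private readonly Dictionary<Type, Func<Base, byte[]>> handlers =
            new Dictionary<Type, Func<Base, byte[]>>();

        public void Register<T>(Func<T, byte[]> handler) where T : Base
        {
            handlers[typeof(T)] = instance => handler((T)instance);
        }

        public byte[] Compress(Base instance)
        {
            Func<Base, byte[]> handler;
            if (handlers.TryGetValue(instance.GetType(), out handler))
            {
                return handler(instance);
            }
            return CompressBase(instance); // common Base handling as a fallback
        }

        private byte[] CompressBase(Base instance)
        {
            // Common compression guts for the Base class...
            return new byte[0];
        }
    }

    Registration would look like compressor.Register<Derived1>(d => /* Derived1 guts */ new byte[0]); a visitor (double dispatch) over the Base hierarchy is the other common option, at the cost of Base knowing that compression exists.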

    Read the article

  • Is .Net Going to Die As far as Server Apps and Desktop Apps are concerned? [closed]

    - by Graviton
    Possible Duplicate: What does Windows 8 mean for the future of .NET? The Windows 8 preview doesn't mention .Net, and the demo seems to showcase what HTML, CSS and JavaScript can do on the Windows 8 OS. The impression I get from watching it is that HTML and JavaScript are going to figure prominently in Windows 8, even for traditional Windows desktop applications. That, coupled with the fact that there is no mention of .Net 5 or Visual Studio 2012/2013 yet (MS is usually pretty quick to announce the next generation of VS tools), makes me worry that sooner or later Microsoft will abandon the .Net platform completely. Yes, not just abandoning Silverlight, but the .Net platform in general. Which means that all the desktop apps and server apps you wrote in .Net are going to be obsolete, much like how VB6 apps are now obsolete. Is .Net going to die? Of course you won't find that all .Net apps stop running tomorrow. But will there be a day (even while Microsoft is alive and kicking) when .Net apps are looked upon as legacy apps in the way we perceive VB6 apps? Edit: I've changed the wording of the title so it's not a dupe of the existing question. Please take note.

    Read the article

  • HTG Explains: The Best and Worst Ways to Send a Resume

    - by Eric Z Goodnight
    With so many people looking for jobs, the slightest edge in your resume presentation has the potential to make or break your chances. But not all filetypes or methods are created equal; read on to see the potential pitfalls your resume faces. In this article, we'll explore what can go wrong in a resume submission, what can be done to counteract it, and also go into why a prospective employer might ignore your resume based on your method of sending it. Finally, we'll cover the best filetypes and methods that can help get you that new job you've been looking for.

    Read the article

  • Introducing the New Boot Framework in CE 7

    - by Kate Moss' Open Space
    CE 7 introduces a new boot loader framework, BLDR (platform\common\src\common\bldr\). Some people like its power and flexibility; others may feel it's too complicated as a boot loader framework. Regardless, it is already there, so let's take a look at its features. Unlike the previous BL framework (CE 7 still provides it in platform\common\src\common\boot\), which is a monolithic library, the new framework has more architectural structure. It not only defines the main body but also provides rich components, such as file systems (BinFS/FAT), download transports, display, logging and block devices (BIOS INT13, FAL, IDE, Flash, etc.). Note that in the block device category, FAL is for the legacy FMD/FAL, while Flash is for the latest MSFlash. Some of you may have found that an MSFlash MDD/PDD compatible partition is hard to create in a boot loader, and now there is a clean solution! (Since this is a big topic, I will introduce it in a future post.) Today, I am going to show you some basic helper components: the image loading functions. An OS image stored on a block device can be in a file format (say, your NK.BIN in a FAT volume) or in a RAW format (say, an image programmed into a BinFS partition). For the first case you can use BootFileSystemReadBinFile (platform\common\src\common\bldr\fileSystem\utils\fileSystemReadBinFile.c); to load from a partition, use BootBlockLoadBinFsImage (platform\common\src\common\bldr\block\utils\loadBinFs.c). Need sample code? No problem: BootLoaderLoadOs in platform\cepc\src\boot\bldr\loados.c provides a perfect example.

    Read the article

  • QA - Developer communication

    - by exiter2000
    I am a developer and have worked at this company for 4-5 years now. We have been practicing scrum for about 2 years. I think I have worked well with the QAs; I believe QAs, developers and technical writers are all one team. We are also actively hiring new team members, and as a legacy member of the team I am expected to assist new members (including developers and testers) with my business knowledge. We work in two-week scrum sprints. I usually deliver my user story completely by the first day of the second week, and I do a QA build with partial functionality of my user story before that, so that QA has a good idea about my implementation and flow. Recently, I have run into a problem with some QAs. In the first week, the QAs do not talk... In the stand-up meeting, they say they are developing test cases, regardless of whether I have delivered the user story or not. In the second week, I do not have a single defect until Thursday afternoon, and then suddenly I get a major defect plus several minor UI defects against the story I delivered a week earlier. Or I get one or two minor defects early in the second week, but major defects on Thursday afternoon or Friday morning. This eventually makes the story roll over to the next sprint. A major defect takes time to fix and, more importantly, it triggers regression testing for the story... Even if I work Thursday evening and fix it, the testing will not finish. And this happens multiple times with certain QAs. As a fellow team member, I asked the QAs if they could test for major defects with higher priority... Rejected... because I do not understand the QA process... So in the stand-up meeting on Wednesday of the second week I asked roughly how many major test cases had been covered so far... The response was that I should not ask this of QA in the stand-up meeting... What do I do?

    Read the article

  • How would you TDD the functionality of getting the corresponding process of a running windows service?

    - by Matt Spinelli
    Purpose: Over the last year or more I've been learning unit testing via books I've read recently, like The Art of Unit Testing, Working Effectively with Legacy Code, and others. I've also been using unit tests, mocking frameworks, and the like periodically at work, and I definitely see the value. However, I'm still having a hard time wrapping my mind around TDD (as opposed to TAD) when the situation calls for code that is going to mostly use external API calls.
    Problem to solve: Get the process associated with a Windows service using the service name. Example: Function GetProcess(ByVal serviceName As String) As Process
    Rules:
    - Show each major iteration in production and test code using TDD.
    - No need to see any other code or configuration required to get things to run; I'm just curious about the interfaces, concrete classes, and test methods.
    - C# or VB.NET.
    - Must use the .NET Framework types for services/processes (i.e. System.Diagnostics.Process).
    - Test frameworks: NUnit or MSTest.
    - Isolation frameworks: Moq, Rhino Mocks, or Microsoft Moles.
    - Must write true unit tests (no integration tests).
    Additional notes: As far as I can tell there are two approaches design-wise:
    1. Use an Inversion of Control approach along with the Adapter and/or Facade patterns to wrap the underlying .NET Framework objects dealing with processes and services.
    2. Keep the .NET Framework code in the class containing the GetProcess method and use code detouring (interception) via Microsoft Moles to isolate the hard dependencies from the method under test.
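    For illustration only, a minimal C# sketch of the first approach (not a definitive answer): the service-to-PID lookup is hidden behind a hypothetical IServiceProcessQuery seam, GetProcess is unit tested with NUnit and Moq against that seam, and a WMI-backed adapter stays a thin, untested edge. All type names here are invented for the example.

    using System;
    using System.Diagnostics;
    using System.Management;   // reference System.Management.dll
    using Moq;
    using NUnit.Framework;

    // The seam: all the production class needs from the OS is a PID.
    public interface IServiceProcessQuery
    {
        int GetProcessId(string serviceName);   // 0 if the service is not running
    }

    // Thin adapter over WMI; deliberately too simple to need unit tests.
    public class WmiServiceProcessQuery : IServiceProcessQuery
    {
        public int GetProcessId(string serviceName)
        {
            string wql = string.Format(
                "SELECT ProcessId FROM Win32_Service WHERE Name = '{0}'", serviceName);
            using (var searcher = new ManagementObjectSearcher(wql))
            {
                foreach (ManagementObject service in searcher.Get())
                {
                    return Convert.ToInt32(service["ProcessId"]);
                }
            }
            return 0;
        }
    }

    public class ServiceProcessLocator
    {
        private readonly IServiceProcessQuery query;

        public ServiceProcessLocator(IServiceProcessQuery query) { this.query = query; }

        public Process GetProcess(string serviceName)
        {
            int pid = query.GetProcessId(serviceName);
            if (pid == 0)
                throw new InvalidOperationException(serviceName + " is not running.");
            return Process.GetProcessById(pid);
        }
    }

    [TestFixture]
    public class ServiceProcessLocatorTests
    {
        [Test]
        public void GetProcess_RunningService_ReturnsProcessWithQueriedId()
        {
            int myPid = Process.GetCurrentProcess().Id;   // a PID guaranteed to exist
            var query = new Mock<IServiceProcessQuery>();
            query.Setup(q => q.GetProcessId("MyService")).Returns(myPid);

            var locator = new ServiceProcessLocator(query.Object);

            Assert.AreEqual(myPid, locator.GetProcess("MyService").Id);
        }

        [Test]
        public void GetProcess_StoppedService_Throws()
        {
            var query = new Mock<IServiceProcessQuery>();
            query.Setup(q => q.GetProcessId("MyService")).Returns(0);

            var locator = new ServiceProcessLocator(query.Object);

            Assert.Throws<InvalidOperationException>(() => locator.GetProcess("MyService"));
        }
    }

    The second approach would instead keep the WMI call inside GetProcess and use Moles to detour ManagementObjectSearcher in the test rather than introducing the interface.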

    Read the article

  • Tuesday at Oracle OpenWorld 2012 - Must See Session: “Oracle Fusion Applications: Best Practices in Integration Design Patterns”

    - by Lionel Dubreuil
    Don’t miss this “CON8685 - Oracle Fusion Applications: Best Practices in Integration Design Patterns” session.
    Speakers: Rajesh Raheja, Senior Director, Development, Oracle; Ravi Sankaran, Director, Applications Development, Oracle
    Date: Tuesday, Oct 2
    Time: 1:15 PM - 2:15 PM
    Location: Palace Hotel - Telegraph
    Oracle Fusion Applications provide various ways to integrate their functional capabilities with other Oracle applications as well as third-party and legacy applications. In this session, you will learn the patterns used when communicating with Oracle Fusion Applications with a SOA approach. It addresses identifying the integration artifacts available (also known as assets) in Oracle Enterprise Repository, how to invoke synchronous and asynchronous web services, importing and exporting bulk data, and integration issues to look out for. The patterns are applicable to both on-premises and SaaS/cloud deployment modes and are indicated as such. Objectives for this session are to:
    - Highlight the various ways to integrate with Oracle Fusion Applications
    - Showcase the use of Oracle Fusion Middleware technologies for integration
    - Describe best practices and design patterns for integration

    Read the article
