Search Results

Search found 57106 results on 2285 pages for 'matthew baier@oracle com'.

Page 1944/2285 | < Previous Page | 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951  | Next Page >

  • Remote Task Flow vs. WSRP Portlets

    - by Frank Nimphius
    A remote task flow is a bounded task flow that is deployed as a stand-alone Java EE application on a remote server with its URL Invoke property set to url-invoke-allowed. The remote task flow is accessed either from a direct browser GET request or, when called from another ADF application, through the task flow call activity. For more information about how to invoke remote task flows from a task flow call activity, see chapter 15.6.4 "How to Call a Bounded Task Flow Using a URL" of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework at http://docs.oracle.com/cd/E23943_01/web.1111/b31974/taskflows_activities.htm#CHDJDJEF

    Compared to WSRP portlets, remote task flows in Oracle JDeveloper 11g R1 and R2 have a functional limitation in that they cannot be embedded as a region on a page but require the calling ADF application to navigate off to another application and page. The difference between a remote task flow call using the task flow call activity and a simple redirect to a remote Java EE application is that the remote task flow has a state token attached that allows the state of the calling application to be restored upon task flow return. A use case for a remote task flow call activity is a "yellow pages lookup" scenario in which different ADF applications use a remote task flow to look up people, products, or similar items and return a selected value to the calling application.

    Note that remote task flow calls need to be performed from a bounded or unbounded top-level task flow of the calling application. If called from a region (using the parent call activity) in a page, the region state is not recovered upon task flow return. ADF developers have recently identified remote task flows as an architecture pattern for partitioning their ADF applications into independently deployed Java EE applications. While this sounds like a desirable use of the remote task flow feature, it cannot be achieved as long as remote task flows don't render as an ADF region.

    Read the article

  • Acer All-in-One Z5810 Touchscreen Issues on 12.04

    - by Johannes
    I have an Acer All-in-One Z5810, and I can't get the touchscreen to work after installing 12.04. Here is the lsusb output:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 0596:0508 MicroTouch Systems, Inc.
        Bus 001 Device 004: ID 04b8:0005 Seiko Epson Corp. Printer
        Bus 001 Device 005: ID 07ca:1336 AVerMedia Technologies, Inc.
        Bus 002 Device 003: ID 04ca:0058 Lite-On Technology Corp.
        Bus 002 Device 004: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
        Bus 002 Device 005: ID 04f2:b23f Chicony Electronics Co., Ltd

    And the xinput --list output:

        ⎡ Virtual core pointer                          id=2   [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer                id=4   [slave  pointer  (2)]
        ⎜   ↳ Lite-On Technology Corp. Wireless Device  id=9   [slave  pointer  (2)]
        ⎜   ↳ Lite-On Technology Corp. Wireless Device  id=10  [slave  pointer  (2)]
        ⎣ Virtual core keyboard                         id=3   [master keyboard (2)]
            ↳ Virtual core XTEST keyboard               id=5   [slave  keyboard (3)]
            ↳ Power Button                              id=6   [slave  keyboard (3)]
            ↳ Power Button                              id=7   [slave  keyboard (3)]
            ↳ Lite-On Technology Corp. Wireless Device  id=8   [slave  keyboard (3)]
            ↳ USB 2.0 camera                            id=11  [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard              id=12  [slave  keyboard (3)]

    My xorg.conf contains:

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 304.48 (buildmeister@swio-display x86-rhel47-04.nvidia.com) Sun Sep 9 21:31:39 PDT 2012

        Section "ServerLayout"
            Identifier  "Layout0"
            Screen 0    "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            InputDevice "TouchScreen"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            Identifier "TouchScreen"
            Driver     "microtouch"
            Option     "Type" "finger"
            Option     "Device" "/dev/ttyS3"
            Option     "ScreenNo" "0"
            Option     "MinX" "0"
            Option     "MaxX" "16383"
            Option     "MinY" "0"
            Option     "MaxY" "16383"
            Option     "SendCoreEvents" "yes"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver     "mouse"
            Option     "Protocol" "auto"
            Option     "Device" "/dev/psaux"
            Option     "Emulate3Buttons" "no"
            Option     "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver     "kbd"
        EndSection

        Section "Monitor"
            Identifier  "Monitor0"
            VendorName  "Unknown"
            ModelName   "Unknown"
            HorizSync   28.0 - 33.0
            VertRefresh 43.0 - 72.0
            Option      "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver     "nvidia"
            VendorName "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier   "Screen0"
            Device       "Device0"
            Monitor      "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Read the article

  • Memory Glutton

    - by AreYouSerious
    I have to admit that I can't get enough storage. I have hard drives just sitting around in case I need to move something, or I'm going to a friend's and either they want something I have or I want something they might have. What I'm going to talk about today is cost-effective memory for devices. I don't know how this particular device will work in a camera, as that's not what I use in my camera; in fact, I don't have a camera that doesn't use either SD or the old CompactFlash card, which isn't so compact anymore. There's this thing that uses two micro SD cards to double the capacity of your memory, and it costs about 4 bucks, without the micro SD cards. I have had one for about a year and was going to throw it away because I couldn't get it to work with my computer or with my Sony Reader. However, I found out by one last-ditch effort that this thing works beautifully with my Sony PSP. There is no software to speak of associated with this thing; you simply put in two SD cards of the same size (if you put in two different sizes it will still work, you'll only double the smaller card's size though) and format through the PSP. Voila, you now have a 29 GB memory card for your PSP. Why is this important? Well, for starters you can carry more music and more videos. Second, if you have gone the way of the hacker... you can store more games on your card. There are just a few things you have to note... I speak from experience... you have to use the USB connection to the PSP to do any file moving; as I said previously, said card doesn't play well with my computers or card readers. I'm not saying it won't work at all, it just hasn't worked with anything I own. Second, if for some reason you try to hack/crack your PSP, don't attempt to delete a game from the PSP; use the USB file browser to remove games. If you delete from the PSP you are likely to have to move all your files off, reformat and start again... just a couple of things I have noticed... if I had done something like that. Anyway, here's a link: http://www.photofast-adapter.com/  and if you want to buy one, get it off eBay, I've seen them as low as $1.99

    Read the article

  • Multithreading: Communication from Parent thread to child thread

    - by Dennis Nowland
    I have a List of threads, normally 3 threads, each of which references a WebBrowser control that communicates with the parent control to populate a DataGridView. What I need to do is: when the user clicks the button in a DataGridViewButtonCell, the corresponding data should be sent back to the WebBrowser control in the child thread that originally communicated with the main thread. But when I try to do this I receive the following error message: 'COM object that has been separated from its underlying RCW cannot be used.' My problem is that I cannot figure out how to reference the relevant WebBrowser control. I would appreciate any help that anyone can give me. The language used is C# WinForms, targeting .NET 4.0.

    Code sample: the following code is executed when the user clicks the Start button in the main thread.

        private void StartSubmit(object idx)
        {
            // Method used by the new thread to initialise a 'myBrowser' control inherited from
            // the WebBrowser control. Each submitters entry is an instance of the custom
            // 'myBrowser' control, which holds details about the function of the object.

            // index: an integer value which represents the thread's id
            int index = (int)idx;

            // submitters[index] is an instance of the 'myBrowser' control
            submitters[index] = new myBrowser();

            // the thread's integer id
            submitters[index]._ThreadNum = index;

            // naming convention used: 'browser' + the thread index
            submitters[index].Name = "browser" + index;

            // set the list in the 'myBrowser' class to hold a copy of the list found in the main thread
            submitters[index]._dirs = dirLists[index];

            // suppress any JavaScript errors that may occur in the 'myBrowser' control
            submitters[index].ScriptErrorsSuppressed = true;

            // attach the event handler
            submitters[index].DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(DocumentCompleted);

            // advance to the next un-opened address in the DataGridView, then navigate to that
            // address in the 'myBrowser' control
            SetNextDir(submitters[index]);
        }

        private void btnStart_Click(object sender, EventArgs e)
        {
            // used to fill the List<string> for use in each thread
            fillDirs();

            // connections is the collection holding the threads that have been opened, 1 to 10 maximum
            for (int n = 0; n < (connections.Length); n++)
            {
                // initialise a new thread with the StartSubmit method, passing parameters
                connections[n] = new Thread(new ParameterizedThreadStart(StartSubmit));

                // naming convention used: conn + the thread index, i.e. 'conn1' to 'conn10'
                connections[n].Name = "conn" + n.ToString();

                // the WebBrowser control needs to run in the single-threaded apartment state
                connections[n].SetApartmentState(ApartmentState.STA);

                // start the thread, passing the thread index
                connections[n].Start(n);
            }
        }

    Once the 'myBrowser' control is fully loaded, I insert form data into web forms found in the web pages it loads, using data entered into rows of the DataGridView. Once a user has entered the relevant details into the different areas of the row, they can click a DataGridViewButtonCell that collects the data entered; that data then has to be sent back to the corresponding 'myBrowser' object found on a child thread. Thank you

    Read the article

  • Welcome To The Nashorn Blog

    - by Homma
    This post is a Japanese-language repost of jlaskey's entry welcoming readers to the Nashorn Blog; the original is at https://blogs.oracle.com/nashorn/entry/welcome_to_the_nashorn_blog In short: the blog will cover Nashorn, the JavaScript engine for the JVM, and this first entry looks back at Nashorn's showing at JavaOne, where Georges Saab brought the project to the wider Java audience. The session on Nashorn and JavaScript on the JVM drew a large crowd, with Sam Pullara of Twitter reporting Mustache.js running roughly 20 times faster than on Rhino and John Ceccarelli of NetBeans describing Nashorn support in NetBeans, followed by a lively Q & A. Nashorn JavaScript team members Michel, Attila and Marcus took questions in their own Q & A session, many of them about Node.jar and Nashorn + Node.jar scenarios; Akhil, who works on Node.jar, was also on hand. The "Nashorn, Node, and Java Persistence" session with Doug Clarke and Akhil again ended in a long Q & A, covering Node.jar use cases and Doug's Nashorn + JPA examples. Attila's session on Dynalink explained how Nashorn interoperates with Java and how other dynamic languages on the JVM can benefit from the same machinery. The broader theme of this JavaOne was that Java and the JVM are opening up to other languages, Nashorn among them. Future posts here will include how-to material on all of these topics. Stay tuned!

    Read the article

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically, and it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7GB free and Box offers 5GB free – both are reliable, and it is simple to send your backups there. SQLBackupAndFTP in its latest version 9 added the option to back up to SkyDrive and Box (in addition to a local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you'd like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail.

    Select databases to backup
    First connect to your SQL Server or Azure SQL Database. Then select the databases you'd like to back up.

    Connect to SkyDrive or Box cloud
    If you have the free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled as it is available in the Standard version or above. Click "Try now" to get a 30-day trial of all options. On the "SkyDrive Settings" form you'll need to authorize SQLBackupAndFTP to access your SkyDrive. Click "Authorize…" to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account and click "Allow". On the next page you will see a field with the authorization code. Copy it to the clipboard. The Box procedure is just the same. After that, return to SQLBackupAndFTP, paste the authorization code and click "OK". After you are authorized, you can enter the path to a backup folder. SQLBackupAndFTP will create the folder if it does not exist. That's all that has to be done to back up to the SkyDrive or Box cloud. You can now click the "Run Now" button to test this job.

    Conclusion
    Whatever your preference is for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses: 5–9 licenses: 20% off; 10–19 licenses: 35% off; more than 20 licenses: 50% off. Please let me know your favorite options for storing the backups.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
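    As a side note for readers who want to see what such a backup job boils down to underneath the GUI, here is a minimal sketch using plain ADO.NET and the T-SQL BACKUP DATABASE statement rather than SQLBackupAndFTP itself, assuming an on-premises SQL Server instance; the connection string, database name and target path are placeholders, not values from the article.

        using System;
        using System.Data.SqlClient;

        class BackupSketch
        {
            static void Main()
            {
                // Placeholder connection details -- point these at your own server.
                const string connectionString = @"Server=.\SQLEXPRESS;Integrated Security=true";
                const string database = "MyDatabase";
                const string backupFile = @"C:\Backups\MyDatabase.bak";

                // BACKUP DATABASE is the statement a backup job ultimately issues;
                // uploading the resulting .bak file to SkyDrive, Box, FTP etc. is a separate step,
                // which is exactly the part the tool described above automates.
                string sql = "BACKUP DATABASE [" + database + "] TO DISK = @file WITH INIT";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    command.Parameters.AddWithValue("@file", backupFile);
                    command.CommandTimeout = 0; // backups can take longer than the 30-second default
                    connection.Open();
                    command.ExecuteNonQuery();
                    Console.WriteLine("Backup written to " + backupFile);
                }
            }
        }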

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-11

    - by Bob Rhubart
    Whiteboards, not red carpets. OTN Architect Day Los Angeles. Oct 25. Free event. Yes, it's TinselTown, but the stars at this event are experts in the use of Oracle technologies in today's architectures. This free event includes a full slate of technical sessions and peer interaction covering cloud computing, SOA, and engineered systems—and lunch is on us. Register now. Thursday October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048

    JDeveloper extensions where? | Peter Paul van de Beek
    "Where does the downloaded stuff go after you installed JDeveloper extensions, like SOA Composite Editor, Oracle BPM Studio, or AIA Service Constructor?" Peter Paul van de Beek has the answer.

    Using Apache Derby Database with WebLogic (the express way) | Frank Munz
    Another technical how-to video from Dr. Frank Munz.

    Compensation Hello World | Ronald van Luttikhuizen
    Oracle ACE Director Ronald van Luttikhuizen's post addresses several questions that came up during the "Effective Fault Handling in SOA Suite 11g" session that he and fellow Oracle ACE Guido Schmutz presented at Oracle OpenWorld.

    Oracle Fusion Middleware Security: OAM and OIM 11g Academies
    Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes," the blog explains, "contain the articles we've written that we believe provide long-lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."

    Maximum Availability Whitepaper for IDM 11gR2 | Oracle Fusion Middleware Security
    The Oracle Fusion Middleware A-Team shares an overview of and a link to a new white paper: "Identity Management 11.1.2 Enterprise Deployment Blueprint."

    Thought for the Day
    "The trouble with the world is that the stupid are sure and the intelligent are full of doubt." — Bertrand Russell (May 18, 1872 – February 2, 1970)
    Source: SoftwareQuotes.com

    Read the article

  • Why is it necessary to install EFI/rEFInd/UEFI/... on a SD Card since the Macbook Pro seems to already have it?

    - by user170794
    Dear askubuntu members, I own a MacBook Pro (late 2009), and when I boot the laptop while holding the alt key, there is an EFI screen, so EFI is installed on... the firmware? I had a few troubles with my hard disk, so I had to change it, but I haven't installed OS X; I have only installed Ubuntu, and still the EFI screen is there, which is surely a good thing. As the new hard disk is making trouble again, I am using Puppy Linux, booting from a CD each time, which is uncomfortable. So I am trying to get Ubuntu installed on an SD card. After having spent many months on the internet grabbing information anywhere I can and trying several things, I applied this method: http://www.weihermueller.de/mac/ I succeeded in making one SD card recognizable by the EFI of my laptop (holding the alt key at boot), but nothing is installed on it yet, as I fear losing the recognizable-by-EFI part. I haven't succeeded in producing the same result on another SD card. I have a bootable USB key of Ubuntu (yipee) which works like a live CD, made with the help of Universal Linux UDF Creator, found here: http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/ on which I have put Ubuntu 13.04 64-bit, retrieved from the official repositories. Even though I have to add the "nouveau.noaccel=1" option to the GRUB command line launching Linux, it works (yipee again) properly as a live CD. When installing Ubuntu I come across the "where do I wanna put Ubuntu" window, and I partition another SD card into the EFI part (40MB) and the Linux part (between 15GB and 16GB). The installation works fine and finishes with no problem. But at reboot, the SD card where Linux is installed is not recognized by the EFI; the icons are: the CD (Puppy Linux), the USB stick (from Linux UDF Creator), the hard drive (the formerly-working Ubuntu 12), but no fourth icon for the SD card whatsoever. As the title of this thread suggests, I am wondering:
    - why is there a need for EFI to be installed on the SD card, since EFI seems to be on my laptop anyway?
    - why does EFI have to be on a different partition than the Linux one? How do both parts communicate?
    - why isn't the EFI part on the SD card made with the help of the live-USB key recognized?
    - on the EFI partition, there is a folder named "EFI" which contains another folder named "ubuntu" which contains a file named "grubx64.efi"; why is there a thing called grub? Is it the Linux GRUB where one can choose either to boot, to boot in safe mode, etc.?
    Thank you for your patience, looking forward to any kind of answer, Julien

    Read the article

  • Ubuntu won't connect to wired network

    - by djeikyb
    I'm running 10.04, upgraded from 9.10, maybe, but probably not upgraded from 9.04. I have two wifi routers. Zeus is connected to the DSL modem. Hermes uses a WDS bridge with Zeus to extend the network. My desktop (Daedalus) is ethernetted to Hermes. My laptop (Clyde) is on wifi, switching to Hermes or Zeus as needed. Occasionally, as in whenever I transfer a large file from desktop to laptop, the WDS bridge will die. Fixing it means restarting both routers, though it seems Hermes should boot first. This is ridiculous, and eventually I'll get around to asking you guys to help me stop it from happening. More important is that my desktop requires a reboot to get back on the network. WTF. ifconfig shows my NIC has no IP. /etc/init.d/networking restart doesn't do anything, not even give me a lousy IP. dhcpcd eth1 grants me an IP address, but doesn't help with internet access. route -n shows what looks like my normal routing table, but pinging google.com informs me it's an unknown host.

        jake@daedalus:~$ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        0.0.0.0         10.1.1.1        0.0.0.0         UG    0      0        0 eth1

    It may be worth noting that I can ping both Zeus (10.1.1.1) and Hermes (10.1.1.4) and my laptop (10.1.1.55). Much obliged for any help. Rebooting is, well, trivial in this instance. But it's stupid. I switched to Linux because I like the idea that if one part breaks, you fix it instead of reboot, reboot, reboot. I've left my poor desktop in disarray, confining myself to my little netbook. My desktop is broken, awaiting magical commands from you brilliant folk. (And yes, I know Clyde the netbook should be named Icarus. It was its original name. Ironically the SSD burned out, and I felt the name wasn't right when it came to reinstalling.)

    Read the article

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspx

    Now that we've talked a little about Puppet, let's see how easy it is to get started.

    Install Puppet
    Let's get Puppet installed. There are two ways to do that:
    - With Chocolatey: open an administrative/elevated command shell and type: choco install puppet
    - Download and install Puppet manually - http://puppetlabs.com/misc/download-options

    Run Puppet
    Let's make pasting into a console window work with Control + V (like it should): choco install wincommandpaste
    If you have a cmd.exe command shell open (and Chocolatey installed), type: RefreshEnv
    The previous command will refresh your environment variables, a la Chocolatey v0.9.8.24+. If you were running PowerShell, there isn't yet a refreshenv for you (one is coming though!). If you have to restart your CLI (command line interface) session, or you installed Puppet manually, open an administrative/elevated command shell and type: puppet resource user
    Output should look similar to a few of these:

        user { 'Administrator':
          ensure  => 'present',
          comment => 'Built-in account for administering the computer/domain',
          groups  => ['Administrators'],
          uid     => 'S-1-5-21-some-numbers-yo-500',
        }

    Let's create a user: puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"
    Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created
    Run the 'puppet resource user' command again. Note the user we created is there!
    Let's clean up after ourselves and remove that user we just created: puppet apply -e "user {'bobbytables_123': ensure => absent, }"
    Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed
    Run the 'puppet resource user' command one last time. Note we just removed a user!

    Conclusion
    You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven't talked about resources, manifests (scripts), best practices and all of that yet. Next we are going to start to get into more extensive things with Puppet. Next time we'll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and when we are done, we can just clean it up quickly.

    Read the article

  • Full disk encryption with separate boot and encrypted keyfile storage: Two-Form Authentication

    - by Cain
    I am trying to set up true full disk encryption with two-form authentication on 12.04 and cannot find out how to call a keyfile for the encrypted root out of another encrypted partition. All documentation or questions I am finding for whole or full disk encryption only encrypt separate partitions on the same disk. This is not what most are calling full disk encryption: /boot is not on a partition on the root drive; rather it is on a USB stick as sdx1. Instead, root is on a logical volume on top of a LUKS container. LUKS is run on the whole disk, encrypting the partition table as well. All drives in the machine are completely encrypted, and opening them requires a USB drive (what I have) as well as a passphrase (what I know), resulting in two-form authentication to boot the machine.

        Device sdx -> cryptroot -> vg00 -> lvroot -> /

    There is no passphrase to open the encrypted root device, only a keyfile. That keyfile is kept on the USB drive with /boot, in its own encrypted partition (I'll call this cryptkey). In order for the root file system (cryptroot) to be opened, initramfs must ask for the passphrase to cryptkey on the USB drive, then use the keyfile inside that to open cryptroot. I did manage to find what I think is the how-to I used to do this once before: http://wiki.ubuntu.org.cn/UbuntuHelp:FeistyLUKSTwoFormFactor I already have the system installed and can chroot into it; however, I cannot get it to call for the keys on the USB during boot. I did find a how-to saying I needed to make a cryptroot conf for initramfs, but I believe that is for a passphrase: https://help.ubuntu.com/community/EncryptedFilesystemLVMHowto#Notes_for_making_it_work_in_Ubuntu_12.04_.22Precise_Pangolin.22_amd64 I also tried to set up crypttab. However, crypttab only works for drives mounted after the root drive, as calling for a keyfile on a device not yet mounted to the system doesn't work. The Feisty how-to included scripts that would be run during boot instructing initramfs to mount the USB drive temporarily and call the keyfile for root, which worked quite well, except those scripts are outdated now; many of the things they relied on have been merged into something else, changed, or simply don't exist anymore. If I have missed a clear how-to for this, that would be wonderful, I just don't think I have.

    Read the article

  • Configurable tables in sql database

    - by dot
    I have the following tables in my database:

        Config Table:
        ======================================
        Start_Range | End_Range | Config_id
        10          | 15        | 1
        ======================================

        Available_UserIDs:
        ==========================
        ID | UserID | Used_YN
        1  | 10     | t
        1  | 11     | f
        1  | 12     | f
        1  | 13     | f
        1  | 14     | f
        1  | 15     | f
        ==========================

        Users:
        ==========================
        UserId | FName | LName
        10     | John  | Doe
        ==========================

    This is used in a reservation system of sorts... which lets an administrator specify a range of numbers that will be assigned to users in the config table. Once the range has been defined, the system then populates the Available_UserIDs table with all the numbers within the range, and sets the Used_YN flag to false. As users sign up, they grab the next user_id number that's not in use... and reserve it. Then the system adds a record to the Users table. Once the admin has specified a range, it is possible that they can change it. For example, they can start with 10-15... and then when the range is used up, they should be able to specify another range like 16 - 99. I've put a unique constraint on the Available_UserIDs table, as well as on the Users table - to ensure that UserIds can't be duplicated. My questions are as follows:

    1. What's the best way to prevent the admins from using a range that's already in use? I thought of the following options:
       - check either the Users table to see if the start range or ending range numbers are being used. If they are, assume that all the numbers in between are in use too, and reject the range.
       - let them specify whatever they want, try to populate the Available_UserIDs table. If there are duplicates, just ignore that specific error message from the database and continue on.
    2. How do I find gaps in the number ranges? For example, if they specify 10-15, and then 20-25, it'd be nice to be able to somehow suggest on my web page that 16-19 is currently available. I found this article: http://stackoverflow.com/questions/1312101/how-to-find-a-gap-in-running-counter-with-sql But it only seems to return the first available number... so in my example above, it would only return the number 16.

    I'm sure there's a simpler way to do things that I'm overlooking!
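    On the second question, a set-based query can report every gap between configured ranges rather than just the first free number. The following is a hedged sketch, assuming SQL Server, the Config table shown above, and a made-up connection string; with ranges 10-15 and 20-25 it would suggest 16-19.

        using System;
        using System.Data.SqlClient;

        class GapFinder
        {
            static void Main()
            {
                // Hypothetical connection string -- point it at the database holding the Config table.
                const string connectionString = @"Server=.\SQLEXPRESS;Database=Reservations;Integrated Security=true";

                // For each configured range, the first number after it and the last number before
                // the next configured range form a gap whenever start <= end.
                const string sql = @"
                    SELECT c1.End_Range + 1        AS GapStart,
                           MIN(c2.Start_Range) - 1 AS GapEnd
                    FROM   Config c1
                    JOIN   Config c2 ON c2.Start_Range > c1.End_Range
                    GROUP  BY c1.End_Range
                    HAVING c1.End_Range + 1 <= MIN(c2.Start_Range) - 1;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // e.g. "16-19 is currently available"
                            Console.WriteLine("{0}-{1} is currently available",
                                              reader.GetInt32(0), reader.GetInt32(1));
                        }
                    }
                }
            }
        }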

    Read the article

  • Oracle Gave Me a Chance - ECEMEA Internship Programme

    - by FelixWehmeyer
    My name is Mohammad Raad and I started in the One Year Training program with Oracle on March 1, 2012. I graduated in September 2011 and started searching for a job. Starting your career, as you all know, is hard because some companies see you as a fresh graduate lacking experience, and no one is willing to invest in you. I used to check the recruitment websites daily to see if there were openings to apply to, but unfortunately no one wanted to hire a fresh graduate with zero years of experience. One day I saw Oracle's 1 year internship program advertised, and that was what I needed. I applied but expected nothing to happen, because I was used to applying and getting no replies, but they called and that was the start! I had my first interview over the phone and decided to go to Qatar and continue to search for a job. Two weeks after arriving in Qatar, Oracle called me for a second interview in Lebanon, so I booked a ticket the same day because my interview was the next day. I had my interview and went back to Qatar. In January 2012, I heard from Oracle that I was accepted and that they had chosen me for this program; it was a day I will not forget! Starting on 1st March 2012, I was full of energy, willing to do anything to gain experience and prove that I can do it. What really gives me a big push is my colleagues' motivation and support, especially from Youssef Halawi, my mentor, and Rami Mattar, because they believe in me and track my progress day by day. At Oracle, I meet customers, attend meetings, demos and presentations, partners' events and online trainings. Now I'm focusing on a specific product that I really liked and aim to master by the end of my internship. So, dear readers, wish me good luck! I know that I will get the experience that I want, because at Oracle, a leader in its industry, you have the chance to grab as much experience as you can handle, simply because there are no limits to excellence. Do you want to find out more about the open roles within Oracle? Follow us on https://campus.oracle.com.

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained. Orchestration or Integration? A common thing to see with many people who are beginning to either build a new SOA based infrastructure, or move an old system to be service oriented, is confusion in the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it’s important to remember that, although a hammer can be used to drive a screw into wood, that doesn’t mean it’s the best way to do it. Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process Orchestration, however, is generally a higher level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might include the earlier example of a product ordering service and couple it with a business rules service and human task to handle edge-cases. A good (but not exhaustive) rule-of-thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention. BPEL vs BPMN For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it’s certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA suite has no BPMN engine, whereas BPM suite has both a BPMN and a BPEL engine. SOA suite has the ESB component “Mediator”, whereas BPM suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2,SOA best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • `make install` fails apparently due to typo, but not in makefile: How to find and fix?

    - by Archelon
    I'm trying to install the fujitsu-usb-touchscreen drivers from here, on Kubuntu 12.04 on my new Fujitsu LifeBook P1630. (See fujitsu-usb-touchscreen on kubuntu 13.04 (64-bit) on P1630: `make` errors.) I downloaded the .zip file, unzipped it, and ran make in the directory thus created; this all worked as expected. However, when I run sudo checkinstall (which invokes make install), things go less well. On the first attempt the installation aborted with the following error:

        make: execvp: /etc/init.d/fujitsu_touchscreen: Permission denied
        make: *** [install] Error 127

    I eventually resolved this by

        $ sudo chmod +x /etc/init.d/fujitsu_touchscreen

    But although a second sudo checkinstall then does not give the execvp error, it still fails at a later stage, and the log (on stdout) shows this dpkg error:

        dpkg: error processing /home/archelon/fujitsu-touchscreen-driver/cybergene-fujitsu-usb-touchscreen-112fdb75b406/cybergene-fujitsu-usb-touchscreen-112fdb75b406_amd64.deb (--install):
         unable to create `/sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy.dpkg-new' (while processing `/sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy'): No such file or directory

    And, indeed, there is no /sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy; there is, however, /sys/module/fujitsu_usb_touchscreen/parameters/touch_maxy, and this is presumably what was intended. But this incorrect filename does not appear in the makefile or any other file in the directory, at least not that I can find. Nor does it appear, as I discovered after running sudo checkinstall --install=no as suggested below, in the .deb package created by checkinstall. Where might such a typographical error be originating, and how would I go about fixing it?

    Edited to add: I'm viewing the contents of the .deb file with ark, Kubuntu's default tool. It contains only three files: control.tar.gz, data.tar.gz, and debian-binary. data.tar.gz contains the directory tree that appears to match up to the usual root filesystem, with /etc, /lib, /sys, and /usr directories. (Looking at other .deb files on my system, this structure appears to be typical.) A second screenshot shows that control.tar.gz contains three files, one of which is empty. Here's the actual .deb file: https://www.dropbox.com/s/odwxxez0fhyvg7a/cybergene-fujitsu-usb-touchscreen_112fdb75b406-1_amd64.deb

    Edited 2013-09-28 to add: After reinstalling Kubuntu 12.04 again, this time recreating the /home partition (which, again, had been generated during an install of 13.04), I can no longer reproduce this error. I am still curious to know how the underscores got changed to slashes, but it looks as though nobody has any idea. It is perhaps also of interest to note that while I have still not successfully run checkinstall against this package, I have done make install; it requires the executabilization of /etc/init.d/fujitsu_touchscreen and the installation of hal, and the GUI freezes shortly after installation completes, and there is no particular new functionality afterwards that I have noticed, and the system can no longer resume from being suspended; however, this will be pursued elsewhere.

    Read the article

  • SQL – Biggest Concerns in a Data-Driven World

    - by Pinal Dave
    The ongoing chaos over Government Agency’s snooping has ignited a heated debate on privacy of personal data and its use by government and/or other institutions. It has created a feeling of disapproval and distrust among users. This incident proves to be a lesson for companies that are looking to leverage their business using a data driven approach. According to analysts, the goal of gathering personal information should be to deliver benefits to both the parties – the user as well as the data collector(government or business). Using data the right way is crucial, and companies need to deploy the right software applications and systems to ensure that their efforts are well-directed. However, there are various issues plaguing analysts regarding available software, which are highlighted below. According to a InformationWeek 2013 Survey of Analytics, Business Intelligence and Information Management where 541 business technology professionals contributed as respondents, it was discovered that the biggest concern was deemed to be the scarcity of expertise and high costs associated with the same. This concern was voiced by as many as 38% of the participants. A close second came out to be the issue of data warehouse appliance platforms being expensive, with 33% of those present believing it to be a huge roadblock. Another revelation made in this respect was that 31% professionals weren’t even sure how Data Analytics can create business opportunities for them. Another 17% shared that they found data platform technologies such as Hadoop and NoSQL technologies hard to learn. These results clearly pointed out that there are awareness and expertise issues that also need much attention. Unless the demand-supply gap of Business Intelligence professionals well versed in data analysis technologies is met, this divide is going to affect how companies make the most of their BI campaigns. One of the key action points that can be taken to salvage the situation, is to provide training on Data Analytics concepts. Koenig Solutions offer courses on many such technologies including a course on MCSE SQL Server 2012: BI Platform. So it’s time to brush up your skills and get down to work in a data driven world that awaits you ahead. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • No client internet access when setting up these iptables rules

    - by Siriss
    I have read many other posts but cannot figure this out. eth0 is my external connected to a Comcast modem. The server has internet access with no issues. eth1 is internal and running DHCP for the clients. I have DHCP working just fine, all my clients can get an IP and ping the server but they cannot access the internet. I am using ISC-DHCP-SERVER and have set /etc/default/isc-dhcp-server to INTERFACE="eht1" Here is my dhcpd.conf file located in /etc/dhcp/dhcpd.conf ddns-update-style interim; ignore client-updates; subnet 10.0.10.0 netmask 255.255.255.0 { range 10.0.10.10 10.0.10.200; option routers 10.0.10.2; option subnet-mask 255.255.255.0; option domain-name-servers 208.67.222.222, 208.67.220.220; #OpenDNS # option domain-name "example.com"; default-lease-time 21600; max-lease-time 43200; authoritative; } I have made the *net.ipv4.ip_forward=1* change in /etc/sysctl.conf here is my interfaces file: auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp iface eth1 inet static address 10.0.10.2 netmask 255.255.255.0 network 10.0.10.0 auto eth1 And finally- here is my iptables.conf file: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *nat :PREROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE #-A PREROUTING -i eth0 -p tcp --dport 59668 -j DNAT --to-destination 10.0.10.2:59668 COMMIT *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -i eth1 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT -A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT -A FORWARD -s 10.0.10.0/24 -o eth0 -j ACCEPT -A FORWARD -d 10.0.10.0/24 -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT -A FORWARD -p icmp -j ACCEPT -A FORWARD -i lo -j ACCEPT -A FORWARD -i eth1 -j ACCEPT #-A FORWARD -i eth0 -m state --state NEW -m tcp -p tcp -d 10.0.10.2 --dport 59668 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT I am completely stuck. I cannot figure out why the clients cannot access the internet. Am I missing a service? Is a service not running? Any help would be greatly appreciated. I tried to be as thorough as possible but please let me know if I have missed something. Thank you!

    Read the article

  • I can't install Ubuntu 12.04.1 on iMac G5

    - by user89004
    So, I have this iMac G5 that doesn't have iSight, only a small light sensor I think underneath, machine model 8.2. I tried burning a Ubuntu 12.04.1 PowerPC 64-bit .iso to a CD, but the computer just won't boot it, I don't know why. Next I tried with a USB stick, but it wouldn't let me boot that either. I created the USB on my dad's Win7 laptop, as the process was way easier than on freakin' Mac or Ubuntu (no command typing AT ALL on Windows). I'm able to get into Open Firmware and type boot usb, and it does show some weird writing that scrolls so fast I can't see anything, and then it just gives me this huge "no" sign, like a stop sign, and freezes. The sign is grey and the line in the middle is tilted towards the left. Another issue I'm having, with hdiutil, is that I can't convert the stupid .iso I just downloaded into a .img because the file keeps disappearing right when it's done converting. I used the syntax from the Ubuntu support page on how to create a bootable USB drive under Mac OS X. I even didn't include the 2 stupid ~ that are shown in the syntax, which are completely worthless, God only knows why they put them there, and I even tried running the whole thing as root with sudo su before the command. The funny thing is that if I convert something smaller, it works. The command I was using is

        hdiutil convert -format UDRW -o /path/to/target.img /path/to/ubuntu.iso

    I even tried

        hdiutil convert /path/to/ubuntu.iso -format UDRW -o /path/to/target.img

    but the same thing happens: the dummy .img.dmg file disappears when the conversion is done, no matter where I set the output file to go. I have tried several different folders; the same thing happens with all of them. I also tried burning a Ubuntu mini iso to a CD, can't remember if it was 11.10 or 12.10, but even though holding c when the iMac boots up does show me the CD and I can boot from it, I get this weird error upon hitting install; it says something like invalid memory access, release keys, and error strings I can't read. I don't have any original DVDs from this iMac and can't run hardware diagnostics. Whatever option I try at the command prompt from the mini Ubuntu CD, I get the same result: an error code and a frozen Open Firmware backdrop. I noticed that the pen drive I created on my dad's Win7 laptop is formatted with MS-DOS, but I can still mount it no problem, so it shouldn't have a problem booting it, right? I used the advice on ubuntu.com to make it, from here. Also, my partition is HFS+ so I can't use it as a hard drive and boot from it. I don't have 2 partitions either, just one HDD, one partition. Please help!!!

    Read the article

  • EPM Planning (Hyperion) V11.1.2 Implementation Hands-On Boot-camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    5-Day Training for Partners: 29th October - 2nd November 2012, London (UK): REGISTER Here This FREE for Partners 5-day workshop is designed to provide implementation instruction on Oracle Hyperion EPM Planning.  This boot-camp is intended for prospective implementers of the Planning and Budgeting functionality of Oracle EPM or implementers that are currently familiar with the basics of EPM Planning and looking to strengthen their base of knowledge in the product. The class begins with an overview of Essbase, the foundation of Hyperion Planning. It provides a general overview of Planning and Planning terms, the architecture of all the Planning components, and how they are commonly used. The course goes over all the steps to create an application from scratch. This involves some preparation work outside of Planning and leads to developing the application in both the Planning Windows and Web clients. Participants will modify existing dimensions and build out the hierarchies using the Web client. Topics Covered The boot-camp shows developers how to build out dimensions using Classic Planning and by using EPMA. It covers the mechanics and cover strategies for automating the build process such as interface tables. It reviews data loads using Load Rules to load the Planning database. The course focuses on tasks that end-users must perform during the planning cycle. It walks students through creating and modifying forms, working with forms to enter data, adding annotations, and the rest of the form features such as running business rules and managing task lists. It covers how to use the forms in the Smart View client and finishes up the end-user perspective by going through Workflow Management and the process of submitting a plan for review. The final section of the course covers Security and other administration topics such as automation and deployment. Prerequisites Ideal participants are Oracle partners (SIs and resellers) with a background in business information systems and a clientele of customers with ongoing or prospective EPM initiatives. Alternatively, partners with the background described above and an interest in evolving their practice to a similar profile are suitable participants. Further online OPN guided learning path information and webinars are available at: Oracle Hyperion Planning 11 Essentials. Please note that attendees are required to bring a laptop. View here laptop requirements and detailed agenda. ·       REGISTER Here : acceptance is subject to availability and your place will be confirmed within two weeks  ( and for help see the Partner Registration Guide ). Training Location: Oracle Corporation UK Ltd Columbus Room Customer Visit Center 1 South Place London EC2M 2RB Training Dates: 29th October - 2nd November  9:30 am – 5:00 pm BST For more information please contact opnboot-camp_ww@oracle.com.

    Read the article

  • Introducing Windows Azure Mobile Services

    - by Clint Edmonson
    Today I'm excited to share that the Windows Azure Mobile Services public preview is now available. This preview provides a turnkey backend cloud solution designed to accelerate connected client app development. These services streamline the development process by enabling you to leverage the cloud for common mobile application scenarios such as structured storage, user authentication and push notifications. If you're building a Windows 8 app and want a fast and easy path to creating backend cloud services, this preview provides the capabilities you need. It lets you take advantage of the cloud to build and deploy modern apps for Windows 8 devices in anticipation of general availability on October 26th. Subsequent preview releases will extend support to iOS, Android, and Windows Phone.

    Features
    The preview makes it fast and easy to create cloud services for Windows 8 applications within minutes. Here are the key benefits:
    - Rapid development: configure a straightforward and secure backend in less than five minutes.
    - Create modern mobile apps: common Windows Azure plus Windows 8 scenarios that the Windows Azure Mobile Services preview will support include automated service API generation providing CRUD functionality and dynamic schematization on top of structured storage; structured storage with powerful query support so a Windows 8 app can seamlessly connect to a Windows Azure SQL database; integrated authentication so developers can configure user authentication via Windows Live; and push notifications to bring your Windows 8 apps to life with up-to-date and relevant information.
    - Access structured data: connect to a Windows Azure SQL database for simple data management and dynamically created tables. Easy to set and manage permissions.

    Pricing
    One of the key things that we've consistently heard from developers about using Windows Azure with mobile applications is the need for a low-cost and simple offer. The simplest way to describe the pricing for Windows Azure Mobile Services at preview is that it is the same as Windows Azure Websites during preview.
    What's FREE?
    - Run up to 10 Mobile Services for free in a multitenant environment
    - Free with a valid Windows Azure Free Trial: 1GB SQL Database
    - Unlimited ingress
    - 165MB/day egress
    What do I pay for?
    - Scaling up to dedicated VMs
    - Once the Windows Azure Free Trial expires: SQL Database and egress

    Getting Started
    To start using Mobile Services, you will need to sign up for a Windows Azure free trial, if you have not done so already. If you already have a Windows Azure account, you will need to request to enroll in this preview feature. Once you've enrolled, this getting started tutorial will walk you through building your first Windows 8 application using the preview's services. The developer center contains more resources to teach you how to:
    - Validate and authorize access to data using easy scripts that execute securely, on the server
    - Easily authenticate your users via Windows Live
    - Send toast notifications and update live tiles in just a few lines of code
    Our pricing calculator has also been updated to calculate costs for these new mobile services.

    Questions? Ask in the Windows Azure Forums. Feedback? Send it to mobileservices@microsoft.com.
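    To make the structured-storage scenario concrete, here is a rough sketch of what inserting a record looks like from a Windows 8 app with the managed Mobile Services client SDK of that era; the service URL, application key and TodoItem class below are placeholders rather than values from this announcement.

        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.MobileServices; // managed Mobile Services client SDK

        // A plain data class; with dynamic schematization the backing table picks up new columns
        // as new properties appear on the class.
        public class TodoItem
        {
            public int Id { get; set; }
            public string Text { get; set; }
            public bool Complete { get; set; }
        }

        public static class MobileServicesSketch
        {
            // Placeholder service URL and application key, copied from the Windows Azure portal.
            static readonly MobileServiceClient Client =
                new MobileServiceClient("https://your-service.azure-mobile.net/", "YOUR-APPLICATION-KEY");

            public static async Task AddItemAsync(string text)
            {
                // InsertAsync sends the object to the auto-generated service API,
                // which stores it in the Windows Azure SQL database behind the service.
                await Client.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = text, Complete = false });
            }
        }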

    Read the article

  • How to host a site in another site - with little or no coding

    - by tunmise fasipe
    SUMMARY: All of this happens on Site A:
    - User visits Site A
    - User enters username and password
    - User clicks on the Login button
    - User is authenticated on Site B behind the scenes
    - User is shown a page on Site A that contains his/her profile from Site B, laid out/styled by Site B
    - User can click links in the profile page that link to other areas in Site B
    Meaning: the session has to be maintained somehow.

    I have a web application where I store users' passwords and usernames. If you log on to this site, you can log in with the password and username to have access to your profile. There is another option that requires you to log in to my site from your site and have your profile displayed within your site. This is because you might already have a site that your clients know you by. This link is close to what I want to do: http://aspmessageboard.com/showthread.php?t=235069 A user on Site A logs in to Site B and has the information on Site B showing in Site A. He should not know whether Site B exists. It should be as if everything is happening in Site A. This latter part is what I don't know how to implement.

    I have these ideas:
    - Have a fixed IFrame within your site to contain my site: but I am concerned about size/layout, since different clients have different layouts/sizes for their content section. I am thinking of how to maintain the session too.
    - A webservice: I don't know how feasible this is, since the password and ID are on my server. You may have to send them back and forth. It means the client would have to code against my API. But I am not just returning data; I have to show them a page that contains the profile details.
    - OpenID, Single Sign-On: just guessing - but the authentication and data reside on my server; there is nothing to access on your side in this case.
    Examples: like logging in to Facebook within my site and still being able to post updates and receive notifications. Facebook implements some of these with an IFrame, e.g. the Like button.

    NOTE: I have tested the IFrame option. It worked, but I still have to remove my site-specific content like my page banner, side navigation etc. I was able to log in normally as if I was actually on the site. This shows my GUI, but:
    - the style sheet was missing
    - content was not styled with CSS
    - any relative URL won't work; it would look for that resource relative to the current server, unless I change links to absolute
    - clicking on the LogIn button produces this error: The state information is invalid for this page and might be corrupted.

    UPDATE: I was reading about REST webservices a few days ago and I got this idea: what about returning XML from a webservice [REST or SOAP] and providing an XSLT (that I can provide) to display it? Thus they won't have to do much coding.
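    As a rough illustration of the XML-plus-XSLT idea in the UPDATE above, this is the standard .NET way to apply a stylesheet to an XML document; the file names are hypothetical, and the XSLT would be the piece supplied along with the XML so that the client site needs little or no coding of its own.

        using System.Xml.Xsl;

        class ProfileRenderer
        {
            static void Main()
            {
                // profile.xml  - hypothetical XML returned by the profile webservice (Site B)
                // profile.xslt - hypothetical stylesheet supplied with it to render that XML
                // profile.html - HTML fragment Site A writes into its own page
                var transform = new XslCompiledTransform();
                transform.Load("profile.xslt");
                transform.Transform("profile.xml", "profile.html");
            }
        }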

    Read the article

  • ArchBeat Link-o-Rama for November 16, 2012

    - by Bob Rhubart
    X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil
    Abhijit Patil's article focuses on how to use X.509 Certificate Revocation Checking Functionality with the OCSP protocol to validate in-bound certificates. Although this article focuses on inbound OCSP validation using OCSP, Oracle WebLogic Server 12c also supports outbound OCSP validation.

    Leveraging Oracle Scorecard and Strategy Management for Everyday BI Needs
    "Oracle Scorecard and Strategy Management (OSSM) is built upon the premise that a scorecard system should not be separate from the BI system, like many comparable tools are today," says author Kevin McGinely. "Instead of a separate application with its own data, its own data definitions, and its own front-end, Oracle made the choice to integrate OSSM directly into OBIEE."

    Applying BI for personal productivity recognition and gamification | Capgemini Oracle Blog
    "It is quite obvious that if you want people to participate you need an appealing and intuitive user interface," says Capgemini's Henk Vermeulen in this interesting exploration of gamification in the enterprise.

    Build and release OSB projects with Maven | Edwin Biemond
    "With Maven we are able to build and deploy OSB projects," says Oracle ACE Edwin Biemond. "The artifacts generated by Maven, called snapshots and releases, can be automatically uploaded to a software repository. These versioned OSB jars can then be downloaded by the OSB Servers and deployed." Biemond shows you how in this detailed technical post.

    ADF Generator for Dynamic ADF BC and ADF UI | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis' post is an extension of his OOW12 presentation, "Oracle ADF Implementations Around the Globe: Best Practices," and includes the sample application he promised to share.

    Service-oriented organizations have a head start in the cloud race | ZDNet
    ZDNet SOA blogger Joe McKendrick offers a snapshot of a recent report by Forrester analyst James Staten.

    Oracle Fusion Middleware Security: X509 Fallback to Form | Debasish Bhattacharya
    Oracle Fusion Middleware A-Team architect Debasish Bhattacharya shares a solution that resulted from brainstorming with colleagues Chris Johnson and Brian Eidelman. "The solution is not very difficult," says Bhattacharya, "though it needs some additional configurations and coding." It's all presented in this detailed post.

    Agile Architecture | David Sprott
    "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post.

    Thought for the Day
    "Operating systems are like underwear — nobody really wants to look at them." — Bill Joy
    Source: SoftwareQuotes.com

    Read the article

  • Northwind now available on SQL Azure

    - by jamiet
    Two weeks ago I made available a copy of [AdventureWorks2012] on SQL Azure and published credentials so that anyone from the SQL community could connect up and experience SQL Azure, probably for the first time. One of the (somewhat) popular requests thereafter was to make the venerable Northwind database available too, so I am pleased to say that as of right now, Northwind is up there too.

    You will notice immediately that all of the Northwind tables (and the stored procedures and views too) have been moved into a schema called [Northwind] – this was so that they could be easily differentiated from the existing [AdventureWorks2012] objects. I used a SQL Server Data Tools (SSDT) project to publish the schema and data up to this SQL Azure database; if you are at all interested in poking around that SSDT project then I have made it available on CodePlex for your convenience under the MS-PL license – go and get it from https://northwindssdt.codeplex.com/.

    Using SSDT proved particularly useful as it alerted me to some aspects of Northwind that were not compatible with SQL Azure, namely that five of the tables did not have clustered indexes. The beauty of using SSDT is that I am alerted to these issues before I even attempt a connection to SQL Azure. Pretty cool, no? Fixing this situation was of course very easy; I simply changed the following primary keys from being nonclustered to clustered:

    - [PK_Region]
    - [PK_CustomerDemographics]
    - [PK_EmployeeTerritories]
    - [PK_Territories]
    - [PK_CustomerCustomerDemo]

    If you want to connect up then here are the credentials that you will need:

    Server:   mhknbn2kdz.database.windows.net
    Database: AdventureWorks2012
    User:     sqlfamily
    Password: sqlf@m1ly

    You will need SQL Server Management Studio (SSMS) 2008 R2 installed in order to connect, or alternatively simply use this handy website: https://mhknbn2kdz.database.windows.net, which provides a web interface to a SQL Azure server.

    Do remember that hosting this database is not free, so if you find that you are making use of it please help to keep it available by visiting PayPal and donating any amount at all to [email protected]. To make this easy you can simply hit this link and the details will be completed for you – all you have to do is log in and hit the “Send” button. If you are already a PayPal member then it should take you all of about 20 seconds!

    I hope this is useful to some of you folks out there. Don’t forget that we also have more data up there than in the conventional [AdventureWorks2012]; read more at Big AdventureWorks2012.

    @Jamiet
    AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
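    For readers who would rather poke at the database programmatically than through SSMS or the web interface, here is a minimal sketch in Python using pyodbc with the credentials published above. The ODBC driver name and the sample query are assumptions added for illustration; they are not part of the original post.

        # Minimal sketch: connect to the community SQL Azure database with the
        # credentials published above and list a few Northwind customers.
        # The ODBC driver name is an assumption; use whichever SQL Server ODBC
        # driver is installed on your machine.
        import pyodbc

        conn_str = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=mhknbn2kdz.database.windows.net;"
            "DATABASE=AdventureWorks2012;"
            "UID=sqlfamily;"
            "PWD=sqlf@m1ly;"
            "Encrypt=yes;"
        )

        with pyodbc.connect(conn_str, timeout=30) as conn:
            cursor = conn.cursor()
            # The Northwind objects live in the [Northwind] schema, as noted above.
            cursor.execute(
                "SELECT TOP 5 CustomerID, CompanyName FROM Northwind.Customers")
            for customer_id, company_name in cursor.fetchall():
                print(customer_id, company_name)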

    Read the article

  • Ubuntu 13.04 boot into black screen, even after installing nvidia drivers, fail at: "Starting Reload cups, upon starting avahi-daemon..."

    - by Elad92
    I got a new machine with an i7 4770K and a GeForce GTX 660. I installed Windows and then installed Ubuntu 13.04. During the installation everything went well. After the installation I booted from the USB again, chose "Try Ubuntu", and installed boot-repair, because Windows booted automatically with no option for Ubuntu.

    After I repaired the boot, I restarted and got into GRUB. When I chose Ubuntu I got a message saying I'm running in low-graphics mode, and when I pressed OK the screen turned off. I couldn't do anything, so I restarted, and after reading this post: My computer boots to a black screen, what options do I have to fix it? I got into recovery, selected dpkg and repaired packages. Then I rebooted and tried to press 'e' and change quiet splash to no splash and nomodeset, but unfortunately neither of them worked; now the screen didn't turn off, but I just saw a blank screen.

    So I rebooted again, entered recovery, and this time went to the root shell and tried to install the NVIDIA drivers following this guide: http://www.howopensource.com/2012/10/install-nvidia-geforce-driver-in-ubuntu-12-10-12-04-using-ppa/ I rebooted and got the same black screen again (the screen didn't go off, I just saw a blank screen). In GRUB I see that the kernel is 3.8.0-25. I also checked that the USB files are not corrupted using the check-CD option from the Ubuntu installation screen.

    I'm really frustrated; I don't know what else I can do. (If someone also knows how I can connect to Wi-Fi from the root shell it would be very much appreciated, because when I choose 'Enable networking' it again boots me to a black screen.) Thanks

    Edit: After digging more, I again booted with nomodeset and saw where it is failing. These are the lines I saw:

    * Starting Reload cups, upon starting avahi-daemon to make sure remote queues are populated [OK]
    * Starting Reload cups, upon starting avahi-daemon to make sure remote queues are populated [fail]

    I searched for it in Google and this is the closest result I got: http://ubuntuforums.org/showthread.php?t=2144261 I didn't find any way to solve it on the internet; from what I understand this is a problem with 13.04. If someone knows how to fix it I will be very grateful. Thanks

    Edit 2 - 23.6.13: As Mitch suggested, I disabled the avahi-daemon, and when I boot in nomodeset I get the following error: What else can be the problem?

    Read the article

  • June 23, 1983: First Successful Test of the Domain Name System [Geek History]

    - by Jason Fitzpatrick
    Nearly 30 years ago the first Domain Name System (DNS) was tested, and it changed the way we interacted with the internet. Nearly impossible to remember numeric addresses became easy to remember names.

    Without DNS you’d be browsing a web where numbered addresses pointed to numbered addresses. Google, for example, would look like http://209.85.148.105/ in your browser window. That’s assuming, of course, that a numbers-based web ever gained enough traction to be popular enough to spawn a search giant like Google. How did this shift occur, and what did we have before DNS? From Wikipedia:

    The practice of using a name as a simpler, more memorable abstraction of a host’s numerical address on a network dates back to the ARPANET era. Before the DNS was invented in 1983, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI. The HOSTS.TXT file mapped names to numerical addresses. A hosts file still exists on most modern operating systems by default and generally contains a mapping of the IP address 127.0.0.1 to “localhost”. Many operating systems use name resolution logic that allows the administrator to configure selection priorities for available name resolution methods.

    The rapid growth of the network made a centrally maintained, hand-crafted HOSTS.TXT file unsustainable; it became necessary to implement a more scalable system capable of automatically disseminating the requisite information. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Requests for Comments have proposed various extensions to the core DNS protocols.

    Over the years it has been refined, but the core of the system is essentially the same. When you type “google.com” into your web browser, a DNS server is used to resolve that host name to the IP address 209.85.148.105, making the web human-friendly in the process.

    Domain Name System History [Wikipedia via Wired]
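    The closing point about resolving “google.com” to a numeric address is easy to see from code. Below is a small Python sketch of the same lookup; the printed address will almost certainly differ from the 209.85.148.105 quoted above, since Google's addresses change over time, and the example is illustrative rather than part of the original article.

        # Small sketch of what DNS does every time you type a host name:
        # resolve a human-friendly name to the numeric address the network uses.
        import socket

        for host in ("google.com", "localhost"):
            # gethostbyname() consults the system resolver, which checks the hosts
            # file (the modern descendant of HOSTS.TXT) and then DNS servers.
            address = socket.gethostbyname(host)
            print(f"{host:12s} -> {address}")

    Note that "localhost" typically resolves from the local hosts file to 127.0.0.1, exactly the mapping the Wikipedia excerpt mentions, while "google.com" is answered by DNS.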

    Read the article

< Previous Page | 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951  | Next Page >