Search Results

Search found 28627 results on 1146 pages for 'case statement'.


  • Top Ten Reasons to Attend the 2015 Oracle Value Chain Summit

    - by Terri Hiskey
    Need justification to attend the 2015 Oracle Value Chain Summit? Check out these top ten reasons you should register now for this event:

    1. Get Results: 60% higher profits. 65% better earnings per share. 2-3x greater return on assets. Find out how leading organizations achieved these results when they transformed their supply chains.
    2. Hear from the Experts: Listen to case studies from leading companies, and speak with top partners who have championed change.
    3. Design Your Own Conference: Choose from more than 150 sessions offering deep dives on every aspect of supply chain management: Cross Value Chain, Maintenance, Manufacturing, Procurement, Product Value Chain, Value Chain Execution, and Value Chain Planning.
    4. Get Inspired by Those Who Dare: Among the luminaries delivering keynote sessions are former SF 49ers quarterback Steve Young and Andrew Winston, co-author of one of the top-selling green business books, Green to Gold.
    5. Expand Your Network: With 1500+ attendees, this summit is a networking bonanza. No other event gathers as many of the best and brightest professionals across industries, including tech experts and customers from the Oracle community.
    6. Improve Your Skills: Enhance your expertise by joining new hands-on training sessions.
    7. Perform a Road Test: Try the latest IT solutions that generate operational excellence, manage risk, streamline production, improve the customer experience, and impact the bottom line.
    8. Join Birds of a Feather: Engage industry peers with similar interests, or shared supply chain communities, in expanded roundtable discussions.
    9. Gain Unique Insight: Speak directly with the product experts responsible for Oracle's Value Chain Solutions.
    10. Save $400: Take advantage of the Super Saver rate by registering before September 26, 2014.

    Read the article

  • Trixbox CentOS Default GW Problem (Multi-homed server)

    - by slashp
    I'm having an issue with a CentOS trixbox server which is dual-homed (one private-facing NIC [eth1], one internet-facing NIC [eth0]). I can't seem to get the default gateway to point at our ISP's gateway via eth0. I've modified /etc/sysconfig/network to contain both a GATEWAY and a GATEWAYDEV line, and removed the GATEWAY line from /etc/sysconfig/network-scripts/ifcfg-eth1 (as well as from /etc/sysconfig/network-scripts/ifcfg-eth0). No default gateway shows up in the routing table unless it's specified in the ifcfg-eth1 file (which is both the wrong interface and the wrong gateway IP); otherwise, the routing table simply does not contain a default gateway. Any ideas would be greatly appreciated! Thanks!

    EDIT: I just realized that when attempting to add the default gateway manually using the route add command, I receive an error stating: SIOCADDRT: Network is unreachable. I know this error can occur when your default gateway and interface IP address are not on the same subnet; in this case, the public IP address on eth0 is in a /29.
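    A minimal sketch of the intended configuration, assuming eth0 is the public interface, 203.0.113.2/29 is its (hypothetical) address, and 203.0.113.1 is the ISP gateway:

        # /etc/sysconfig/network (hypothetical values)
        NETWORKING=yes
        GATEWAY=203.0.113.1
        GATEWAYDEV=eth0

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=static
        IPADDR=203.0.113.2
        NETMASK=255.255.255.248   # the /29; the gateway must fall inside 203.0.113.0-7
        ONBOOT=yes

    With a layout like this, SIOCADDRT: Network is unreachable from route add usually means no interface route covers the gateway yet, i.e. the IPADDR/NETMASK on eth0 and the GATEWAY are not in the same /29; restarting networking after fixing both files (service network restart) is the usual test.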

    Read the article

  • How do I disable administrator prompt in Windows 8?

    - by Arnold Zokas
    I am using Windows 8 Enterprise on my development machine. Most of the time I need full administrator rights for debugging, changing system files, etc. In Windows 7, setting UAC to "never notify" would disable any administrator prompts. In Windows 8 this is no longer the case: even with UAC disabled I get prompted to grant programs elevated privileges. Is there a way to disable this behaviour? Note: I am fully aware of the repercussions. I have antivirus, firewall, etc., and am generally quite careful about what I download or install on my machine.
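    For reference, the Windows 7-style kill switch still exists in Windows 8 as a registry value, though it is a blunt instrument: with EnableLUA set to 0, UAC is off entirely, Windows 8 Store apps stop launching, and a reboot is required. A sketch using standard reg.exe syntax, to apply at your own risk:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v EnableLUA /t REG_DWORD /d 0 /f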

    Read the article

  • Gwibber only launches sometimes

    - by Stephen Judge
    I face a problem where Gwibber only launches sporadically: sometimes when I click to launch it, it starts, and other times it doesn't. I can't figure out what is preventing it from launching, what sort of information I need to collect to make a bug report, or where to collect it from. I have killed the gwibber-service processes in the System Monitor (it loads three processes called gwibber-service; is this normal?) several times and tried to launch Gwibber again, but this doesn't seem to work. The process just called gwibber starts, then the three gwibber-service processes start, then the gwibber process ends and the three gwibber-service processes remain, but the application still does not launch. Generally, I want to know whether other people are facing the same problem. If someone can give me some guidance on how to triage this problem and gather the information needed to make a bug report, I would be grateful. The upside is that when it is not launching, it prevents me from wasting endless hours reading my streams on Identi.ca and Twitter, so it is a bit like Workrave for microblogging. In which case maybe I shouldn't fix this problem :-)
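    One generic way to capture something useful for a bug report is to kill the leftover service processes and relaunch Gwibber from a terminal, logging its output (the log path here is arbitrary):

        pkill -f gwibber-service          # stop any stuck service processes
        gwibber 2>&1 | tee ~/gwibber-launch.log

    Attaching that log to a report filed with ubuntu-bug gwibber would give the developers the error that the silent failure is hiding.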

    Read the article

  • Best solution for getting referral information in PHP

    - by absentx
    I am currently redoing some link structuring on a website. In the past we have used specific PHP files on the last step to direct the user to the proper place. Example: www.mysite.com/action/go-to-blue.php or www.mysite.com/action/short/go-to-red.php and www.mysite.com/action/tall/go-to-red.php. We are now restructuring to eliminate the /short/ and /tall/ directories. What this means is that "go-to-blue.php" will now be doing some extra processing to make sure it sends the visitor to the proper place. The static method of the past was quite effective because, well, if they left from that page we knew we had it right. Now, since we are 301-redirecting action/short/go-to-red.php to just action/go-to-red.php, it is quite important on "go-to-red.php" that we realize a user may have been redirected from /short/ or /tall/. Right now I am using HTTP_REFERER, and of course in my testing that works fine, but after a lot of reading it is clear that this is not a solid solution, so I started to brainstorm other ways to check and make sure we get the proper referral information. If we could check HTTP_REFERER plus some other test, I would feel confident we have a pretty good system in place to send the visitor to the right place. Some questions/comments: Could I use a session variable or a cookie to accomplish this goal? If so, would it be maintained through the 301 redirect? I don't see why it wouldn't be. Passing the URL in the URL is not an option in this case.
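    A minimal sketch of the session idea, assuming the 301 is issued by a small PHP stub left at the old /short/ URL rather than by a bare .htaccess rule (a pure Apache redirect cannot set a session). The session cookie survives the redirect because it lives in the browser, independent of the 301:

        <?php
        // old location: /action/short/go-to-red.php (hypothetical stub)
        session_start();
        $_SESSION['came_from'] = 'short';   // record the variant server-side
        header('Location: /action/go-to-red.php', true, 301);
        exit;

        <?php
        // new location: /action/go-to-red.php
        session_start();
        $variant = isset($_SESSION['came_from']) ? $_SESSION['came_from'] : 'unknown';
        // route the visitor based on $variant, falling back to HTTP_REFERER checks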

    Read the article

  • Why Wouldn't Root Be Able to Change a Zone's IP Address in Oracle Solaris 11?

    - by rickramsey
    You might assume that if you have root access to an Oracle Solaris zone, you'd be able to change the zone's IP address. If so, you'd proceed along these lines ...

    First, you'd log in:

        root@global_zone:~# zlogin user-zone

    Then you'd remove the IP interface:

        root@user-zone:~# ipadm delete-ip vnic0

    Next, you'd create a new IP interface:

        root@user-zone:~# ipadm create-ip vnic0

    Then you'd assign the IP interface a new IP address (10.0.0.10):

        root@user-zone:~# ipadm create-addr -a local=10.0.0.10/24 vnic0/v4
        ipadm: cannot create address: Permission denied

    Why would that happen? Here are some potential reasons: you're in the wrong zone; nobody bothered to tell you that you were fired last week; or the sysadmin for the global zone (probably your ex-girlfriend) enabled link protection mode on the zone with this sweet little command:

        root@global_zone:~# dladm set-linkprop -p \
        protection=mac-nospoof,restricted,ip-nospoof vnic0

    How'd your ex-girlfriend learn to do that? By reading this article: Securing a Cloud-Based Data Center with Oracle Solaris 11, by Orgad Kimchi, Ron Larson, and Richard Friedman. When you build a private cloud, you need to protect sensitive data not only while it's in storage, but also during transmission between servers and clients, and when it's being used by an application. When a project is completed, the cloud must securely delete sensitive data and make sure the original data is kept secure. These are just some of the many security precautions a sysadmin needs to take to secure data in a cloud infrastructure. Orgad, Ron, and Richard explain the rest and show you how to employ the security features in Oracle Solaris 11 to protect your cloud infrastructure. It is Part 2 of a three-part article on cloud deployments that use the Oracle Solaris Remote Lab as a case study.

    About the Photograph: That's the fence separating a small group of tourist cabins from a pasture in the small town of Tropic, Utah.
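    If you suspect link protection is what is biting you, the property can be inspected and lifted from the global zone (these are standard dladm subcommands; vnic0 is the link from the example above):

        root@global_zone:~# dladm show-linkprop -p protection vnic0
        root@global_zone:~# dladm reset-linkprop -p protection vnic0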

    Read the article

  • How do I remove a LOT of indexed pages from Google?

    - by Thierry
    A few weeks ago we figured out that Google has indexed some information we would rather keep confidential, in the form of individual PDF files. Our assumption was that this was a problem with our robots.txt that we had overlooked. Even though we are not sure whether or not this is the case, we are certain that the robots.txt file is in a valid format and is, according to Google's webmaster tools, blocking the files. However, even after this adjustment, made weeks ago, Google still has the PDF files indexed, but tells us that further information cannot be provided because of the robots.txt file. As you can hopefully understand, this is unwanted behaviour due to the nature of the documents. I am aware that Google provides a removal-request page for this purpose, but there are a lot of files. Is there an easier way to get Google to remove all of the files from its search engine? If not, is there anything else you could advise us to do besides manually requesting Google to remove every single page? Thanks in advance.
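    One widely used approach, for what it's worth: robots.txt only blocks crawling, not indexing, which is why the PDFs stay listed. If you instead allow crawling and serve a noindex header on the PDFs, Google drops them on recrawl, and no per-URL requests are needed. A sketch for Apache, assuming mod_headers is enabled:

        # .htaccess
        <FilesMatch "\.pdf$">
            Header set X-Robots-Tag "noindex, noarchive"
        </FilesMatch>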

    Read the article

  • How to consolidate servers with a not-very-strong infrastructure

    - by Sim
    All,

    Situation: We are in the retail industry with about 10 distributors and use Solomon as the standard ERP for all our systems. Each distributor has 1 HQ and 5-10 branches; each branch has its own server (Windows 2000/XP/2003 + Solomon + another built-in POS system). Every day, branches have to extract data and send it (via email/Skype) to HQ for data consolidation. When we first deployed our ERP, the infrastructure (e.g. Internet connection) wasn't reliable enough; that's why we went with the de-centralized model (each branch got its own server). Now the infrastructure is mature, and we need to consolidate data more quickly (not branches -- HQ -- our company, but something like HQ -- our company only).

    Goal: We have Solomon servers only in each distributor HQ. All the transactions in branches (retrieved from POS) are synchronized with the HQ server directly. There is a backup plan in case the Internet goes down or the HQ server goes down.

    Question: Given the above, could you guys suggest some models for me? Should we use Terminal Services, or any other solutions? Any watchouts/suggestions? Any good articles to read about this? Thanks a lot

    Read the article

  • Tell Us Once – Guardian Innovation Award Winner

    - by BizTalk Visionary
    Yesterday the Tell Us Once project received its latest accolade. My partner in crime in the execution of the delivery of software for this project, Mark Usher, reports: It's always great to receive recognition for the effort you put in when working on a project. It's no secret that here at Solidsoft we are extremely proud of our association with the Government's Tell Us Once (TUO) programme. Having already been selected by Microsoft as Worldwide Partner Conference (WPC) 2011 Award Winners for Application Integration, we are very pleased that the TUO programme as a whole has been recognised and has won the Guardian newspaper's Innovation Nation Award for Frontline Services (http://www.guardian.co.uk/innovation-nation-awards). The TUO entry was judged the winner over three other shortlisted solutions from Dyfed Powys Police, North Yorkshire County Council and Staffordshire County Council. Innovation Nation is a partnership between Virgin Media Business and the Guardian, an initiative to uncover the most innovative businesses, public sector organisations and charities in the UK today. Its aim is to showcase the ideas, the endeavour and the energy that are making things better in the areas of customer service, unique working practices, frontline government services and collaboration. Solidsoft have been involved with the Tell Us Once programme since its inception in 2007 and worked closely with the Department for Work and Pensions (DWP) to produce a business case for the programme. Teaming up with Atos (who host the application), Solidsoft delivered the first national solution in 2011 and a second phase in April 2012. Whilst currently restricted to distributing citizen data to central government organisations and local government authorities, DWP is now actively engaging with the private sector to see if TUO data can be disclosed to private sector organisations such as banks and building societies. Solidsoft welcome this expansion into the private sector where even more efficiencies will be realised. Mark Usher - Solidsoft Sales and Marketing Director. For my part I'd like to say a big thank you to the Solidsoft, Atos and DWP teams that made it happen.

    Read the article

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of system failure the write-ahead log might not be saved correctly and might in fact get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync; however, I was unable to make any configuration changes with hdparm, since I get the following:

        $ sudo hdparm -I /dev/xvdf
        /dev/xvdf: HDIO_DRIVE_CMD(identify) failed: Invalid argument

    It looks like that option is not available in the kind of setup you get in EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?
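    Probably correct: hdparm talks to ATA drives, and EBS volumes are virtual Xen block devices, so there is no drive cache to toggle from the instance. What can still be checked is the filesystem's write-barrier behaviour (example assumes ext4 on /dev/xvdf mounted at /pgdata; adjust names and options to your setup):

        mount | grep xvdf                        # inspect current mount options
        sudo mount -o remount,barrier=1 /pgdata  # ensure barriers are on (ext3 defaults them off)

    Beyond that, WAL durability on EBS rests on fdatasync actually reaching the EBS layer, which is Amazon's side of the contract rather than something hdparm can verify.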

    Read the article

  • How can I debug VNC screen repainting issues?

    - by stevecoh1
    I have what some might consider a trivial use for VNC, but I'd like to get it to work and it's technically interesting to me. My use case is that I'd sometimes like to be able to control my desktop from my living room while watching TV. The desktop runs Ubuntu, currently 12.04, but that may change soon. I'm using the default Vino server. I'd like to control it from my iPad, and I have a nicely performing WiFi network. I got the well-regarded (if reviews can be believed) app VNC Viewer for the iPad. It's not working as well as I'd hoped. The problem is the speed of repainting; it's abysmally slow. I can click a close button, walk over to the desktop and see that the window has closed, but on the iPad the VNC client won't show the close for minutes, if ever. I've noticed that closing windows takes a lot longer to update than opening them. So the question is: is this primarily client-caused or server-caused? And if server-caused, what can be done about it? Is Vino the best server, or is something else better? Thanks
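    One way to split the question is to swap the server temporarily and see whether the repaint problem follows it. x11vnc can export the same live :0 desktop, and its -noxdamage flag specifically works around stale-screen issues with compositing window managers (assumes x11vnc is installed; run from the desktop session):

        x11vnc -display :0 -noxdamage -forever

    If repaints are fast under x11vnc from the same iPad client, Vino is the likely culprit; if they are equally slow, look at the client or the WiFi path instead.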

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure HTML website which I have hosted with HostMonster. HostMonster had very good reviews on some comparison website, and in general so far they appeared to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) for my most important keyphrase when doing a Google search. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had still appeared to be very old, like three weeks previous. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt and it contained the disturbing text below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk spam trick, and I had indeed been getting some spam from "attracta". So my questions are:

    1. Should I simply delete this robots.txt?
    2. Was the file there all along, placed there because of some commercial tie-in between Attracta and HostMonster?
    3. Does the Attracta robots file explain the lack of re-caching?
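    One quick sanity check, for what it's worth: a sitemap: line never blocks crawling (it only points crawlers at a sitemap), so by itself it cannot explain a stale cache, and deleting the file is safe for a site that wants everything indexed. After deleting or replacing it, you can confirm what crawlers actually see (hypothetical domain):

        curl -s http://www.example.com/robots.txt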

    Read the article

  • Samba shares disappear every night

    - by Crash893
    I have Ubuntu 8.04 LTS, and recently a weird problem has been cropping up. Every night something happens, and in the morning my coworkers can't see the shares. If I try to remote into the machine via SSH, I don't get a prompt. When I rebooted the machine I would get a "video cannot be displayed in this mode" screen and no other activity on the box. I booted from GRUB into recovery and tried doing a package repair (keeping my smb.conf), and that didn't seem to do anything. After a few other reboots I was able to get it to come up (I'm not sure what I did). Yesterday it did the same thing; I booted to recovery, did a repair of the X server, and it came right up, so I thought that resolved the issue, but then today: same thing. Does anyone have any idea what I can look for (I'm very new to Linux in general)? Worst-case scenario, can I just reinstall Ubuntu over again without blowing away the data?
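    Before the reinstall, a few standard read-only checks the morning it happens could narrow this down (all stock tools on 8.04):

        testparm                                # validate smb.conf and show parsing errors
        sudo tail -n 50 /var/log/samba/log.smbd # recent smbd errors
        grep -i error /var/log/syslog | tail    # anything else that fired overnight (cron jobs?)

    And yes, a reinstall that preserves data is feasible if /home and the shared directories live on their own partition or are backed up first; the installer only needs to touch the root filesystem.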

    Read the article

  • nagios wrongly reports packet loss

    - by Alien Life Form
    Lately, my Nagios 3.2.3 install (CentOS 5, monitoring ~300 hosts, 1150 services) has started to occasionally report high packet loss on 50-60 hosts at a time. The problem is that it's bogus: manual runs of ping (or of Nagios's own check_ping binary) find no fault with any of the affected hosts. The only possible cures I have found so far are: run all the checks manually (they will succeed, but it may act up again on the next scheduled check), or acknowledge the alerts and wait for the problem to go away (which may take several hours). I suspect (for no particular reason other than single rescheduled checks succeeding) that the problem may lie with all the checks being mass-scheduled together, in which case introducing some jitter into the scheduling (how?) might help. Or it may be something completely different. Ideas, anyone?
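    On the jitter question: Nagios has knobs for exactly this, and the "smart" inter-check delay spreads checks evenly across the interval instead of firing them in a clump. A sketch of the relevant nagios.cfg directives (standard Nagios 3 options; the numeric value is a starting point, not a recommendation):

        service_inter_check_delay_method=s  # "smart" spreading of service checks
        service_interleave_factor=s         # interleave services across hosts
        max_concurrent_checks=64            # cap parallel checks so pings aren't starved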

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh.

    The existing system:
    - A Microsoft Access project performs dashboard duties.
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project.
    - Third-party libraries referenced by Access interface directly with the control system (read: motors, conveyors, and counters).
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) exist at each facility.

    The issue: This system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction, and who is prescribing the desired deliverables called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • Ubuntu doesn't boot after adding a bootflag to the Windows partition

    - by Nils
    I have Ubuntu 10.10 installed on one (physical) hard drive and Windows on the other. Grub is installed on both drives and boots both operating systems. When I wanted to install SP1 for Windows 7, I had to add a bootable flag to the partition from which Windows boots; otherwise the installation of SP1 does not work. I did so by booting into Ubuntu and using GParted to add this flag. After doing so, the update to SP1 worked without any problems. But when trying to boot back into Ubuntu, grub complained that it couldn't find the kernel anymore! I tried booting an Ubuntu minimal CD and restoring grub using chroot, update-grub and grub-install, which didn't work: I still could not boot Ubuntu and was dropped into a minimal system called initramfs. It seems, however, that the UUIDs of the partitions changed; I guess this happened when I added the boot flag to the Windows disk. Next I tried to tell grub not to use the UUID for loading the kernel, by uncommenting something in /etc/default/grub. Then the kernel boots but suddenly stops (I guess when it is trying to mount the root file system), saying that the relevant UUID does not exist and putting me into initramfs again. The strange thing is that there I couldn't even manage to mount the root partition using /dev/sdb1 (on which it is, in my case). I would be glad if there is a way to restore the system without having to reinstall it.
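    A rough sketch of the usual repair from the minimal CD, assuming the Ubuntu root really is on /dev/sdb1 (adjust device names; blkid is the authority on the current UUIDs):

        sudo blkid                     # the real, current UUIDs of every partition
        sudo mount /dev/sdb1 /mnt
        grep UUID /mnt/etc/fstab       # compare against blkid, edit any stale entries
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys /mnt/sys
        sudo chroot /mnt
        update-grub                    # regenerates grub.cfg with the fresh UUIDs
        grub-install /dev/sdb

    Once /etc/fstab and the regenerated grub.cfg agree with what blkid reports, both the kernel load and the root mount should resolve again.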

    Read the article

  • How can I fix this configure error?

    - by balor123
    I'm trying to build mosh from source on a SUSE 10 machine and am getting the following error:

        checking for protobuf... no
        configure: error: Package requirements (protobuf) were not met:

        No package 'protobuf' found

        Consider adjusting the PKG_CONFIG_PATH environment variable if you
        installed software in a non-standard prefix.

        Alternatively, you may set the environment variables protobuf_CFLAGS
        and protobuf_LIBS to avoid the need to call pkg-config.
        See the pkg-config man page for more details.

    I downloaded the source to protobuf and installed it in a custom path as well. I'm not using a package manager for any of this and cannot for various reasons outside the scope of the question. I added that custom path to my PATH and rehashed. Typically this is enough for configure, but in this case it's not doing the trick. I added the prefix for protobuf to PKG_CONFIG_PATH but am still hitting this error. What should I do next to get past this error?
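    configure is failing inside pkg-config, not in PATH lookup, so the fix is to tell pkg-config where protobuf's .pc file landed. A sketch, assuming protobuf was configured with --prefix=$HOME/opt/protobuf (hypothetical path):

        export PKG_CONFIG_PATH="$HOME/opt/protobuf/lib/pkgconfig:$PKG_CONFIG_PATH"
        pkg-config --modversion protobuf   # should print the version once the .pc file is found
        ./configure

    Note that PKG_CONFIG_PATH must point at the lib/pkgconfig directory under the prefix, not at the prefix itself; if mosh then builds but fails to start, the runtime linker may need the same prefix via LD_LIBRARY_PATH.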

    Read the article

  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new world record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers. The SPARC T4 servers running the Siebel PSPP 8.1.1.4 workload, which includes Siebel Call Center and Order Management System, demonstrate the impressive throughput of the SPARC T4 processor by achieving 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users, at a rate of 239,748 business transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.

    Performance Landscape

    Systems:
        1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) – Web
        3 x SPARC T4-2 (2 x SPARC T4, 2.85 GHz) – App/Gateway
        1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) – DB
    Throughput: 239,748 Txn/hr
    Users: 29,000
    Response times (sec): Call Center 0.165, Order Management 0.925
    Transactions (Call Center + Order Management): 197,128 + 42,620
    Users (Call Center + Order Management): 20,300 + 8,700

    Configuration Summary

    Web Server: 1 x SPARC T4-1 server (1 x SPARC T4 processor, 2.85 GHz; 128 GB memory); Oracle Solaris 10 8/11; iPlanet Web Server 7.
    Application Servers: 3 x SPARC T4-2 servers, each with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 3 x 300 GB SAS internal disks; Oracle Solaris 10 8/11; Siebel CRM 8.1.1.5 SIA.
    Database Server: 1 x SPARC T4-1 server (1 x SPARC T4 processor, 2.85 GHz; 128 GB memory); Oracle Solaris 11 11/11; Oracle Database 11g Release 2 (11.2.0.2).
    Storage: 1 x Sun Storage F5100 Flash Array with 80 x 24 GB flash modules.

    Benchmark Description

    The Siebel 8.1 PSPP benchmark includes Call Center and Order Management. Siebel Financial Services Call Center provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. The use cases tested were Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. These three complex business transactions were executed simultaneously for a specific number of concurrent users, in a 30%/40%/30% ratio, together accounting for 70% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 10, 13, and 35 seconds respectively.

    Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle, and can be tightly integrated with back-office applications, allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. The use cases tested were Order & Order Items Creation, and Order Updates. These two complex transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above, in a 50%/50% ratio, together accounting for the remaining 30% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 20 and 67 seconds respectively.

    Key Points and Best Practices

    No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.

    See Also

    Siebel White Papers | SPARC T4-1 Server (oracle.com, OTN) | SPARC T4-2 Server (oracle.com, OTN) | Siebel CRM (oracle.com, OTN) | Oracle Solaris (oracle.com, OTN) | Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Consuming Async SOA in a WebService Proxy By Anagha Desai

    - by JuergenKress
    Consider a scenario where an application is built using SOA async processes and needs to be consumed in a WebService proxy. In this blog, we demonstrate how to implement this use case. To achieve this, we follow a two-step process:

    1. Create an async SOA BPEL process.
    2. Consume it in a WebService proxy.

    Prerequisite: JDeveloper with the SOA extension installed.

    Steps: To begin with step 1, create a SOA application and name it SOA_AsyncApp. This invokes the Create SOA Application wizard. In the wizard, choose a composite with a BPEL process in step 3. Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Open Windows Server 2012/Windows 8 Start Menu

    - by bmccleary
    I realize that Windows Server 2012 (and Windows 8) removed the Start menu button and replaced it with moving your mouse to the upper right corner of the screen. This works fine when the desktop is full screen. However, I access all my servers through windowed RDP connections (or through the Hyper-V console window), and in this case the desktop is not full screen. Therefore, in order to open the new "start" menu, I have to slowly and carefully move my mouse to within just a few pixels of the top right corner of the window. Also, because the session is windowed, the default hot keys (Windows + D, etc.) won't work. There has got to be an easier way. Has anyone else experienced this frustration?

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or by anyone who wants to try it out. New features include:

    - Support for J2ME platforms, specifically CDC platforms including OJEC (this is in addition to our existing support for Java SE and SE Embedded)
    - Per-application integration with Berkeley DB on Android
    - Server-side support for the Apache TomEE platform

    Adding support for Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of networks of embedded devices, AKA machine-to-machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration with Berkeley DB represents the completion of phase I of our Android support plan; we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • Group policy applied to AD OU attributes

    - by Eric Smith
    I'm not well-versed in AD, so I would like to resolve a question I have with regard to AD information. I understand that it is possible to apply group policy to OUs, thereby restricting access. What I'd like to know is: is it possible to do the same with OU attributes? Some context would help. There's a requirement to store address information in AD (IMO, a natural fit), but for various reasons, although things like name should obviously be globally accessible, access restrictions are desired on the address. In this case, is it possible to apply security to the address portion of the OU attributes, or does each address have to be broken into a separate OU (a solution that feels smelly, given that an address doesn't have identity)?

    Read the article

  • Puppet master/agent basic setup

    - by lewap
    I'm trying to set up a basic puppet agent/master use case with an agent server and a master. I've set up two servers with puppet and puppetmaster respectively, and run the following on them:

        puppet master --no-daemonize --verbose
        puppet agent --test
        puppet cert --list    # to get the list
        puppet cert --sign    # to sign it
        puppet agent --test

    On the final run I get the message:

        err: Could not retrieve catalog from remote server: hostname was not match with the server certificate
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: hostname was not match with the server certificate

    What do I need to do to get the agent and master to be able to talk to each other?
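    The error means the agent contacted the master under a hostname that is not in the master's certificate. Two common fixes, sketched with hypothetical hostnames: either address the master by the exact name its cert was issued for, or regenerate the master's cert with the extra names listed:

        # on the agent: use the name in the master's certificate
        puppet agent --test --server=puppetmaster.example.com

        # or on the master: add alternate DNS names before regenerating its cert
        # (puppet.conf, [master] section)
        #   dns_alt_names = puppet,puppetmaster,puppetmaster.example.com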

    Read the article

  • List of MD/RAID/LVM devices: how to mount them without any further information available?

    - by Jens
    Hello experts, I do not have many skills in Linux, and I installed a system two years ago that I now had to reboot, but it seems I did not automate everything with start scripts... My problem: I am missing some mountpoints. I have a list of my RAIDs (excerpt):

        md3 : active (auto-read-only) raid1 sda6[0] sdb6[1]
              97659008 blocks [2/2] [UU]
        md4 : active (auto-read-only) raid1 sda7[0] sdb7[1]
              250099776 blocks [2/2] [UU]

    and it seems md3 and md4 are NOT mounted. However, I do NOT have any entries for them in the fstab file, and I do NOT know which filesystem they have (most likely ext3). What should I do next? Can I safely try to mount them with (mount -t ext3 /dev/md3 /mnt/mymntpoint), or will this lead to corrupted data in case they are not ext3? The goal is to remount these devices again, but I do not know anything about them anymore... Thank you very much, Jens
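    A cautious way to proceed, sketched below: both detection commands are read-only, and mounting with -o ro keeps a wrong filesystem guess from writing anything (a wrong -t simply fails with "wrong fs type"):

        sudo blkid /dev/md3 /dev/md4    # reports the filesystem type, if recognizable
        sudo file -s /dev/md3           # second opinion, read from the superblock
        sudo mount -o ro -t ext3 /dev/md3 /mnt/mymntpoint

    Once the mounts are confirmed good, adding matching lines to /etc/fstab would make them survive the next reboot, which is the automation that was missing.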

    Read the article

  • What is the recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger."

    We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: "Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: "What is the reason why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are under 860 bytes."

    Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain, and (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them."

    So I'm here for some fact checking. Is the 860 byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this lower, closer to the 150 byte limit? Just to save on bandwidth costs, or is there a performance gain in doing so?
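    For comparison, this threshold is a first-class knob in common web servers; nginx, for example, exposes it directly (standard directives; the 150-byte value here just mirrors Google's lower bound, not a recommendation):

        gzip on;
        gzip_min_length 150;    # compress responses of 150 bytes and up
        gzip_types text/plain application/json application/javascript;

    On the performance question: below one MTU of payload (roughly 1460 bytes over TCP), compression cannot reduce the packet count, so the win is bandwidth only; above it, fewer packets can also mean fewer round trips, which is where the latency gain comes from.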

    Read the article
