Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.

Page 929/1167 | < Previous Page | 925 926 927 928 929 930 931 932 933 934 935 936  | Next Page >

  • How can I set up Friendly URL to Nginx?

    - by MKK
    I'm trying to use dokuwiki with its Friendly URL on Nginx. The problem that I'm facing is, it doesn' show correct path to any link(even stylesheet, and images) on every page It looks that paths are missing wiki/ part. If I click on the image and show its destination, it shows this url http://foo-sample.com/lib/tpl/dokuwiki/images/logo.png But it has to be this below. http://foo-sample.com/wiki/lib/tpl/dokuwiki/images/logo.png and login URL is not working either. If I click on login link, it takes me to http://foo-sample.com/wiki/start?do=login&sectok=ff7d4a68936033ed398a8b82ac9 and it says 404 Not Found I took a look at this https://www.dokuwiki.org/rewrite#nginx and tried as much as possible. However it still doesn't work. Here's my conf files. How can I fix this problem? dokuwiki is set in /usr/share/wiki /etc/nginx/conf.d/rails.conf upstream sample { ip_hash; server unix:/var/run/unicorn/unicorn_foo-sample.sock fail_timeout=0; } server { listen 80; server_name foo-sample.com; root /var/www/html/foo-sample/public; location /wiki { alias /usr/share/wiki; index doku.php; } location ~ ^/wiki.+\.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index doku.php; fastcgi_split_path_info ^/wiki(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME /usr/share/wiki$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } /usr/share/wiki/.htaccess ## Enable this to restrict editing to logged in users only ## You should disable Indexes and MultiViews either here or in the ## global config. Symlinks maybe needed for URL rewriting. #Options -Indexes -MultiViews +FollowSymLinks ## make sure nobody gets the htaccess files <Files ~ "^[\._]ht"> Order allow,deny Deny from all Satisfy All </Files> # Uncomment these rules if you want to have nice URLs using # $conf['userewrite'] = 1 - not needed for rewrite mode 2 # Not all installations will require the following line. If you do, # change "/dokuwiki" to the path to your dokuwiki directory relative # to your document root. # If you enable DokuWikis XML-RPC interface, you should consider to # restrict access to it over HTTPS only! Uncomment the following two # rules if your server setup allows HTTPS. RewriteCond %{HTTPS} !=on RewriteRule ^lib/exe/xmlrpc.php$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301] <IfModule mod_geoip.c> GeoIPEnable On Order deny,allow deny from all SetEnvIf GEOIP_COUNTRY_CODE JP AllowCountry Allow from .googlebot.com Allow from .yahoo.net Allow from .msn.com Allow from env=AllowCountry </IfModule>

    Read the article

  • Problems using custom keyboard layout / symbols file

    - by January
    I have a custom xkb symbols file which looks as follows: // modify the basic German layout to have polish characters default partial alphanumeric_keys xkb_symbols "basic" { include "de(basic)" name[Group1]="Germany - with polish characters"; key <AD03> { [ e, E, eogonek, Eogonek ] }; key <AD09> { [ o, O, oacute, Oacute ] }; key <AC01> { [ a, A, aogonek, Aogonek ] }; key <AC02> { [ s, S, sacute, Sacute ] }; key <AD06> { [ z, Z, zabovedot, Zabovedot ] }; key <AB02> { [ x, X, zacute, Zacute ] }; key <AB03> { [ c, C, cacute, Cacute ] }; key <AB06> { [ n, N, nacute, Nacute ] }; }; The name of the file is depl. I copy the file to /usr/share/X11/xkb/symbols and it works with setxkbmap depl. However, I also tried to add the respective menu entries in the "Text Entry" customization. I have modified the file /usr/share/X11/xkb/rules/evdev.xml and added the following section: <layout> <configItem> <name>depl</name> <shortDescription>depl</shortDescription> <description>German (with Polish characters)</description> <languageList> <iso639Id>ger</iso639Id> </languageList> </configItem> </layout> I have then reconfigured the xkb data with sudo dpkg-reconfigure xkb-data. It works in as much as that the new layout appears as a viable option in the Text Entry dialog, it can be added to the list of dialogs and is visible in the application indicator: However, it does not work, the new symbols are not loaded. No errors are reported in /var/log/Xorg.0.log.

    Read the article

  • JavaOne Latin America 2011: Keynotes, Sessions, Hands-on Lab, Geek Bike Ride, etc.

    - by arungupta
    After a very successful JavaOne San Francisco, the first JavaOne on the road for 2011 is heading to Latin America next week. There are 59 sessions delivered by several rock star speakers and with 60% sessions delivered by the local community. There are strategy, technical and community keynotes. The community keynote on Thursday will particularly be lot of fun with appearances from Java Champions, JUG leaders, jHome, and several others. Also check out the Exhibitor Floor Plan and don't forget to Register! The complete session schedule gives an overview for the list of technical sessions and hands-on lab. There are several Java EE, GlassFish, and WebLogic sessions and are highlighted below: Tuesday, Dec 6 Oracle WebLogic Server XML-Free Programming: Java Server and Client Development without <> Java EE Application in Production: Tips and Tricks to achieve zero downtime Web Applications and Wicket Scala on GlassFish and Java EE 6 REST and Java best practices, issues and solutions for the Enterprise Building a RESTful Web Application with JAX-RS and Ext JS 4 Wednesday, Dec 7 Oracle GlassFish Server in the Virtual World JAX-RS 2.0: What's in JSR 339 ? JSF 343: What's coming in Java Message Service 2.0 ? The Great News of JSF 2.0! Thursday, Dec 8 Servlet 3.1 Update Develop, Deploy, and Monitor a Java EE 6 Application with Clustered GlassFish 3.1 Migrating from EJB/SOAP to REST with JAX-RS: The Case of the Central Bank of Brazil GlassFish REST Administration Back End: An Insider look at a real REST Application Scripting and Agile Java EE Applications with Jython And this is Brazil so a fun element is important. There are the usual Caiprihinas, Churrascaria, late night social dinners, community engagement, and multiple other fun activities. Fabiane Nardon and SOUJava gang are also organizing a Geek Bike Ride on the Sunday (Dec 4th) before JavaOne. The 20k ride (map) starts at 7am and goes through the streets of Sao Paulo. This is an opportunity to meet some of the JavaOne speakers and attendees outside the conference. They've even designed a t-shirt and 32 geeks have signed up so far. I'm glad my discussion with Fabiane during FISL early this year for arranging this bike ride is finally taking shape! I'm definitely looking forward to it and will be bringing nice fruity Odwalla bars for all the riders. Be there to ride with me and many others :-) Stay updated by following @oracledobrasil and @javaoneconf. I'll be there, will you ? Don't wait and register now! And in case you are interested in reading about the experience from last year ... it was lot of fun! Just check out a collage of pictures yourself ... And the complete album at:

    Read the article

  • EBS – ATG Webcast 9/11 - 9/12

    - by cwarticki
    EBS – ATG Webcast in September 2012 EBS – Multiple Language Support (MLS) Agenda :EBS is MLS Ready                                                                                 NLS / MLS Basic ArchitectureNLS / MLS InstallationNLS / MLS Configuration Settings                                                                    TroubleshootingQuestion and AnswersEMEA Session : September 11, 2012 at 09:00 UK / 10:00 CET / 13:30 India / 17:00 Japan / 18:00 Sydney (Australia) Details & Registration : Note 1480084.1 Direct link to register in WebEx US Session : September 12, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain/ 01:00 PM Eastern ·      Details & Registration : Note 1480085.1 ·      Direct link to register in WebEx ·         Schedules, recordings and the Presentations of the Advisor Webcast drove under the EBS Applications Technology area can be found in Note 1186338.1. ·         Current Schedules of Advisor Webcast for all Oracle Products can be found on Note 740966.1 ·         Post Presentation Recordings of the Advisor Webcasts for all Oracle Products can be found on Note 740964.1 If you have any question about the schedules or if you have a suggestion for an Advisor Webcast to be planned in future, please send an E-Mail to Ruediger Ziegler.

    Read the article

  • 1000baseT/Full Supported and Advertised but not working!

    - by user11973
    Hello, i'm using a AT3IONT-I motherboard with integrated card. If I ethtool it to 1000 full duplex it wont work! Here is sudo ethtool eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: Symmetric Receive-only Advertised auto-negotiation: Yes Speed: 100Mb/s Duplex: Full Port: Twisted Pair PHYAD: 0 Transceiver: internal Auto-negotiation: on MDI-X: Unknown Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000033 (51) Link detected: yes here is sudo lshw -C network: *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:04:00.0 logical name: eth0 version: 03 serial: bc:ae:c5:8b:7d:33 size: 100MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8168 driverversion=8.021.00-NAPI duplex=full ip=192.168.0.2 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s resources: irq:42 ioport:e800(size=256) memory:f8fff000-f8ffffff memory:f8ff8000-f8ffbfff memory:fbff0000-fbffffff And lspci -nn: 00:00.0 Host bridge [0600]: nVidia Corporation MCP79 Host Bridge [10de:0a82] (rev b1) 00:00.1 RAM memory [0500]: nVidia Corporation MCP79 Memory Controller [10de:0a88] (rev b1) 00:03.0 ISA bridge [0601]: nVidia Corporation MCP79 LPC Bridge [10de:0aad] (rev b3) 00:03.1 RAM memory [0500]: nVidia Corporation MCP79 Memory Controller [10de:0aa4] (rev b1) 00:03.2 SMBus [0c05]: nVidia Corporation MCP79 SMBus [10de:0aa2] (rev b1) 00:03.3 RAM memory [0500]: nVidia Corporation MCP79 Memory Controller [10de:0a89] (rev b1) 00:03.5 Co-processor [0b40]: nVidia Corporation MCP79 Co-processor [10de:0aa3] (rev b1) 00:04.0 USB Controller [0c03]: nVidia Corporation MCP79 OHCI USB 1.1 Controller [10de:0aa5] (rev b1) 00:04.1 USB Controller [0c03]: nVidia Corporation MCP79 EHCI USB 2.0 Controller [10de:0aa6] (rev b1) 00:06.0 USB Controller [0c03]: nVidia Corporation MCP79 OHCI USB 1.1 Controller [10de:0aa7] (rev b1) 00:06.1 USB Controller [0c03]: nVidia Corporation MCP79 EHCI USB 2.0 Controller [10de:0aa9] (rev b1) 00:08.0 Audio device [0403]: nVidia Corporation MCP79 High Definition Audio [10de:0ac0] (rev b1) 00:09.0 PCI bridge [0604]: nVidia Corporation MCP79 PCI Bridge [10de:0aab] (rev b1) 00:0b.0 RAID bus controller [0104]: nVidia Corporation MCP79 RAID Controller [10de:0abc] (rev b1) 00:0c.0 PCI bridge [0604]: nVidia Corporation MCP79 PCI Express Bridge [10de:0ac4] (rev b1) 00:10.0 PCI bridge [0604]: nVidia Corporation MCP79 PCI Express Bridge [10de:0aa0] (rev b1) 00:15.0 PCI bridge [0604]: nVidia Corporation MCP79 PCI Express Bridge [10de:0ac6] (rev b1) 03:00.0 VGA compatible controller [0300]: nVidia Corporation ION VGA [10de:087d] (rev b1) 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 03) If i use Code: sudo ethtool -s eth0 speed 1000 duplex full autoneg off then in ethtool speed is Unknown and it doesn't work; if I set it via pre-up it wont work either... Please help!! Thanks!

    Read the article

  • How are you coping with Ubuntu's Unity app launcher? (It auto-hides, can't minimize apps)

    - by Bad Learner
    [Firstly, let me tell you that this cannot be subjective in anyway, as I think at least Ubuntu beginners will have these questions boggling in their mind; and yes, this is a question that has a definite answer - - so, I am completely within the rules.] Okay, coming to the point, I see that Ubuntu uses Unity since v10.xx (netbook edition?) and carried the same to v11.04 & v11.10. As someone who's stuck to Windows for all these years, it's somewhat difficult to cope with Ubuntu's Unity, for the following reasons: [1] The Unity app launcher (to the screen's left) auto-hides when a window is maximized. [2]- And once launched, apps cannot be minimized by clicking the app's icon in the launcher. I have to go to the top-left of the screen and click the "_" button. I do know I can fix these issues by installing some configuration tool. But the thing is, if that's how it's meant to work, Canonical/Ubuntu would have designed it that way. But they didn't. Why? w.r.t above points [1], [2]: [1] EDITED: So, does it mean, it's good to work without maximizing the windows? Because if I maximize the window, the app launcher hides. And I need to hover the mouse to the left of the screen, wait a bit (even if it's a sec or even less, I can still feel the lag), and then click on the next app icon in the launcher to switch to it. I do know, I can use Alt+TAB to switch, but I am not sure which window comes next. This, I feel, isn't productive. Also, this makes me feel, Ubuntu is designed for large screens (it's nice on my 1920x1080p screen), because I can have two windows side-by-side or something like that on a large screen. This is not possible on smaller screens. [2]- Being able to minimize an application's window by clicking on its icon in the launcher (just like it works on Windows & probably elsewhere) would have been great, rather than having to go to the top-left and clicking the _ (minimize) button which brings up the app launcher itself (from hiding) most of the time. It's too tiring to have these small issues in the UI. I really would like to know how you are coping with these issues the way they are?

    Read the article

  • C# Dev - I've tried Lisps, but I don't get it.

    - by Jonathan Mitchem
    After a few months of learning about and playing with lisps, both CL and a bit of Clojure, I'm still not seeing a compelling reason to write anything in it instead of C#. I would really like some compelling reasons, or for someone to point out that I'm missing something really big. The strengths of a Lisp (per my research): Compact, expressive notation - More so than C#, yes... but I seem to be able to express those ideas in C# too. Implicit support for functional programming - C# with LINQ extension methods: mapcar = .Select( lambda ) mapcan = .Select( lambda ).Aggregate( (a,b) = a.Union(b) ) car/first = .First() cdr/rest = .Skip(1) .... etc. Lambda and higher-order function support - C# has this, and the syntax is arguably simpler: "(lambda (x) ( body ))" versus "x = ( body )" "#(" with "%", "%1", "%2" is nice in Clojure Method dispatch separated from the objects - C# has this through extension methods Multimethod dispatch - C# does not have this natively, but I could implement it as a function call in a few hours Code is Data (and Macros) - Maybe I haven't "gotten" macros, but I haven't seen a single example where the idea of a macro couldn't be implemented as a function; it doesn't change the "language", but I'm not sure that's a strength DSLs - Can only do it through function composition... but it works Untyped "exploratory" programming - for structs/classes, C#'s autoproperties and "object" work quite well, and you can easily escalate into stronger typing as you go along Runs on non-Windows hardware - Yeah, so? Outside of college, I've only known one person who doesn't run Windows at home, or at least a VM of Windows on *nix/Mac. (Then again, maybe this is more important than I thought and I've just been brainwashed...) The REPL for bottom-up design - Ok, I admit this is really really nice, and I miss it in C#. Things I'm missing in a Lisp (due to a mix of C#, .NET, Visual Studio, Resharper): Namespaces. Even with static methods, I like to tie them to a "class" to categorize their context (Clojure seems to have this, CL doesn't seem to.) Great compile and design-time support the type system allows me to determine "correctness" of the datastructures I pass around anything misspelled is underlined realtime; I don't have to wait until runtime to know code improvements (such as using an FP approach instead of an imperative one) are autosuggested GUI development tools: WinForms and WPF (I know Clojure has access to the Java GUI libraries, but they're entirely foreign to me.) GUI Debugging tools: breakpoints, step-in, step-over, value inspectors (text, xml, custom), watches, debug-by-thread, conditional breakpoints, call-stack window with the ability to jump to the code at any level in the stack (To be fair, my stint with Emacs+Slime seemed to provide some of this, but I'm partial to the VS GUI-driven approach) I really like the hype surrounding Lisps and I gave it a chance. But is there anything I can do in a Lisp that I can't do as well in C#? It might be a bit more verbose in C#, but I also have autocomplete. What am I missing? Why should I use Clojure/CL?

    Read the article

  • Application Integration Future &ndash; BizTalk Server &amp; Windows Azure (TechEd 2012)

    - by SURESH GIRIRAJAN
    I am really excited to see lot of new news around BizTalk in TechEd 2012. I was recently watching the session presented by Sriram and Rajesh on “Application Integration Futures: The Road Map and What's Next on Windows Azure”. It was great session and lot of interesting stuff about the feature updates for BizTalk and Azure integration. I have highlighted them below, definitely customers who haven’t started using Microsoft BizTalk ESB Toolkit should start using them which is going to be part of the core BizTalk product in future release, which is cool… BizTalk Server feature enhancements Manageability: ESB Tool Kit is going to be part of the core BizTalk product and Setup. Visualize BizTalk artifact dependencies in BizTalk administration console. HIS administration using configuration files. Performance: Improvements in ordered send port scenarios Improved performance in dynamic send ports and ESB, also to configure BizTalk host handler for dynamic send ports. Right now it runs under default host, which does not enable to scale. MLLP adapter enhancements and DB2 client transaction load balancing / client bulk insert. Platform Support: Visual Studio 2012, Windows 8 Server, SQL Server 2012, Office 15 and System Center 2012. B2B enhancements to support latest industry standards natively. Connectivity Improvements: Consume REST service directly in BizTalk SharePoint integration made easier. Improvements to SMTP adapter, to add macros for sending same email with different content to different parties. Connectivity to Azure Service Bus relay, queues and topics. DB2 client connectivity to SQL Server and SQL Server connectivity to Informix. CICS Http client connectivity to Windows. Azure: Use Azure as IaaS/PaaS for BizTalk environment. Use Azure to provision BizTalk environment for test environment / development. Later move to On-premises or build a Hybrid cloud approach. Eliminate HW procurement for BizTalk environment for testing / demos / development etc. Enable to create BizTalk farm easily and remove/add more servers as needed. EAI Service: EAI Bridge Protocol transformation Message Transformation Running custom code Message Enrichment Hybrid Connectivity LOB Applications On-premises Application On-premises Connectivity to Applications in the cloud Queues/ Topics Ftp Devices Web Services Bridge can be customized based on the service needs to provide different capabilities needed as part of the bridge. Look at the sample for EDI bridge for EDI service sample. Also with Tracking enabled through the portal. http://msdn.microsoft.com/en-us/library/windowsazure/hh674490 Adapters: Service Bus Messaging adapter - New adapter added. WebHttp adapter - For REST services. NetTcpRelay adapter - New adapter added. I will start posting more and once I start playing with this…

    Read the article

  • Enterprise Manager 12c: New DSS Demos Available

    - by Javier Puerta
    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade     Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! We are pleased to announce the availability of the Oracle Application Replay demo that showcases some of the key capabilities of performing realistic, production scale testing of your web and packaged Oracle applications. This demo specifically focuses on capturing production web traffic from an E-Business Suite application and replaying the captured workload on a test E-Business Suite application to assess the impact of an application infrastructure change on the workload. The target audiences are application developers, quality assurance teams, IT managers and production control staff that deal in day-to-day change management activities and trouble shooting of production environments. Demo Highlights: Enterprise Manager 12c workflows for capturing application workload Seamless integration of Application Replay with Real User Experience Insight for application workload capture Enterprise Manager 12c centralized workflows for replaying captured application workloads in a test environment Demonstrates how to minimize risk when deploying a complex EBusiness Suite application infrastructure change. Rich reporting capability for performance analysis and problem detection User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! We are pleased to announce the availability of the Oracle Real User Experience Insight demo that showcases some of the key capabilities of user experience monitoring. This demo specifically focuses on business reporting, integrated performance diagnostics, tracking of customer journey’s through RUEI’s userflow tracking capabilities and it’s Key Performance Indicators tracking and configuration. Demo Highlights: Application-centric dashboard Integration with Oracle Enterprise Manager 12c – JVMD, ADP and BTM Session diagnostics and user session replay Monitoring through “Key Performance Indicators” (KPI) --- create alerts/incidents FUSION Application centric dashboards & integrated BI Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade DSS is pleased to announce an upgrade to the Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo. While retaining the content from the initial release of the demo—Diagnostic and Tuning Packs, Test Data Management and Data Masking, and Real Application Testing—the demo now includes a new Data Masking for Real Application Testing scenario. Demo Features: Diagnostic and Tuning Packs SQL Performance Analyzer Database Replay Data Masking Masking Real Application Testing workloads Testing pending Optimizer statistics Test Data Management

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I’ve taken a good hard look at JavaScript. I’ve used it before but mostly in the capacity of web design. Using JQuery to make your web page do cool stuff is different than really creating a JavaScript application using all of the language constructs. What I’m finding as I use it more is that I may have been wrong about my assumptions about it. Let me explain.   I enjoyed doing cool stuff with JQuery but the limited experience with JavaScript as a language coupled with the bad things that I heard about it led me to not have any real interest in it. However, JavaScript is ubiquitous on the web and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the JavaScript: The Good Parts book by Douglas Crockford.   Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following: The global object null vs. undefined truthy and falsy limited (nearly nonexistent) scoping ‘==’ and ‘===’ (I just don’t get the reason for coercion)   However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I am legitimately enjoying JavaScript. This I was not expecting. I’m not going to go into a huge dissertation on what I like about it, but some things include: Object literal notation dynamic typing functional style (JavaScript: The Good Parts describes it as LISP in C clothing) JSON (better than XML) There are parts of JavaScript that seem strange to OOP developers like myself. However, just because it is different or seems strange does not mean it is bad. Some differences are quite interesting and useful.   I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can arise that lead to this, such as choosing the wrong technology for a problem’s solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight into CoffeeScript or Dart. After exploring it, I find that I am beginning to enjoy it the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another ‘arrow’ to the ‘quiver’, so to speak. I do still intend to learn CoffeeScript to see what the hub-bub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language.   Has something similar ever happened to you? Tell me about it in the comments below.

    Read the article

  • Where I missed boot.properties.?

    - by Dyade, Shailesh M
    Today one of my customer was trying to start the WebLogic Server ( Production Instance) , though he was trying to start the server in a standard way, but it was failing due to below error :   ####<Oct 22, 2012 12:14:43 PM BST> <Warning> <Security> <BanifB1> <> <main> <> <> <> <1350904483998> <BEA-090066> <Problem handling boot identity. The following exception was generated: weblogic.security.internal.encryption.EncryptionServiceException: weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed> And it started failing into below causes. ####<Oct 22, 2012 12:16:45 PM BST> <Critical> <WebLogicServer> <BanifB1> <AdminServer> <main> <<WLS Kernel>> <> <> <1350904605837> <BEA-000386> <Server subsystem failed. Reason: java.lang.AssertionError: java.lang.reflect.InvocationTargetException java.lang.AssertionError: java.lang.reflect.InvocationTargetException weblogic.security.internal.encryption.EncryptionServiceException: weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed at weblogic.security.internal.encryption.JSafeSecretKeyEncryptor.decryptSecretKey(JSafeSecretKeyEncryptor.java:121) Customer was facing this issue without any changes in the system, it was stable suddenly started seeing this issue last night. When we checked, customer was manually entering the username and password, config.xml had the entries encrypted However when verified, customer had the boot.properties at the Servers/AdminServer/security folder and DomainName/security didn't have this file. Adding boot.properies fixed the issue. Regards Shailesh Dyade 

    Read the article

  • Ubuntu 12.04 x64 LTS VPN Server not changing IP

    - by user288778
    I used this guide http://silverlinux.blogspot.co.uk/2012/05/how-to-pptp-vpn-on-ubuntu-1204-pptpd.html and it worked fine. I'm able to connect but the problem is, that my IP being changed to "localip" not "remote ip". This is what I get from tail -f /var/log/syslog [code] June 6 00:09:19 instant5860 NetworkManager[1456]: Unmanaged Device found; state CONNECTED forced (see http://bugs.launchpad.net/bugs/191889) June 6 00:09:19 instant5860 NetworkManager[1456]: Marking connection 'Wired connection 1' invalid. June 6 00:09:19 instant5860 NetworkManager[1456]: Activation (eth1) failed. June 6 00:09:19 instant5860 NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) complete. June 6 00:09:19 instant5860 NetworkManager[1456]: (eth1): device state change: failed - disconnected (reason 'none') [120 30 0] June 6 00:09:19 instant5860 NetworkManager[1456]: (eth1): deactivating device (reason 'none') [0] June 6 00:09:19 instant5860 NetworkManager[1456]: Unmanaged Device found; state CONNECTED forced. June----- avahi-daemon[440]: Withdrawing address record for fe80......... on eth1 Jun------avahi-daemon[440]: Leaving mDNS multicast group on interface eth1. IPv6 with address fe80..... Jun------avahi-daemon[440]: Interface eth1.IPv6 no longer relevant for mDNS. Jun------avahi-daemon[440]: Joining mDNS multicast group on interface eth1.IPv6 with address fe80.... Jun------avahi-daemon[440]: New relevant interface eth1.IPv6 for mDNS Jun------avahi-daemon[440]: Registering new address record for fe80..... on eth1.*. Jun - snmpd[1172]: error on subcontainer 'ia_addr' insert (-1) dbusp382]: [syste] Activating service name='org.freedesktop.PackageKit' (using servicehelper) AptDaemon: INFO: Initializing daemon AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer dbus[382]: [system] Successfu;;y activated service 'org.freedesktop.PackageKit' AptDaemon.PackageKit: INFO: Initializing PackageKit transaction AptDaemon.Worker: INFO: Simulating trans: /org/debian/apt/transaction/233beca013a0473ea34d9dea805af5df AptDaemon.Worker: INFO: Processing transaction /org/debian/apt... AptDaemon.PackageKit: INFO: Get updates() AptDaemon.Worker: INFO: Finished snmpd[1172]: error on subcontainer pptpd[23611]: CTRL: Client 82.33.... control connection started pptpd[23611]: CTRL: Starting call (launching pppd, opening GRE) pptpd[23611]: pppd 2.4.5 started by root uid 0 pptpd[23611]: Using interface ppp0 pptpd[23611]: Connect ppp0 <-- /dev/pts/1 NetworkManager[1456]: SCPlugin - Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0) NetworkManager[1456]:SCPlugin - Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found. pptpd[23612]: peer from calling number 82... authorized. kernel: [2918261.416923] init: ufw pre-start process (23613) terminated with status 1 dhclient: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 7 CTRL: Ignored a SET LING info packet with real ACCMs! local IP address:109.0.121.197 remote IP address: 109.0.84.56 dhclient: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 13 NetworkManager[1456]: (eth1): DHCPv4 request timed out. NetworkManager[1456]: (eth1): canceled DHCP transaction, DHCP client pid 23280 NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) scheduled... NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) started... 
NetworkManager[1456]: (eth1): device state change: ip-config - failed (reason 'ip-config-unavailable') [70 120 5[ NetworkManager[1456]: Unmanaged 'ia_addr' insert (-1)[/code]

    Read the article

  • Unity Dash and top toolbar won't open after updating to 12.10

    - by pgrytdal
    Today, I updated to Ubuntu 12.10. After re-starting, like the updater suggested, the toolbar on the top of the screen, and the dash won't load. I seem to be missing other features, as well, like alttab to switch windows, etc. I am able to access the Terminal, by typing CtrlAltT, which is how I was bale to access Firefox. How do I fix this problem? Edit: 2:10 PM on 10/19/12 As Chris Carter suggested, I'm including the results of the teminal command lspci (Sorry... I dont know how to format between Back-tics): 00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge 00:01.0 PCI bridge: Acer Incorporated [ALI] AMD RS780/RS880 PCI to PCI bridge (int gfx) 00:04.0 PCI bridge: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 0) 00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) 00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3) 00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] 00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:12.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller 00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 3a) 00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) 00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller 00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge 00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor HyperTransport Configuration (rev 40) 00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Link Control 01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RS780M/RS780MN [Mobility Radeon HD 3200 Graphics] 01:05.1 Audio device: Advanced Micro Devices [AMD] nee ATI RS780 HDMI Audio [Radeon HD 3000-3300 Series] 03:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10) 09:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01)

    Read the article

  • What shall I include in a 10 week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This session will be available to 1st year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach, but in the 1st semester I will be teaching them web standards technologies - HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways should I teach them? Do they need to know about bold as well as strong, etc. UPDATE: based, on your feedback I will only be teaching the latest version of everything - CSS3, HTML5 etc. I'm not sure exactly how long the semester will be but I'm guessing about 10-12 weeks. Each session is a 2 hour lab. Obviously there's only so much I can cover in that time and it will be up to the students to go a research this stuff properly on W3 schools etc. My ideas so far were: Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL etc. Looking at different example websites and discussing how they work. Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong etc) Lesson 2 - CSS for styling and layout - fonts, webfonts, float etc Lesson 3 - Intro to programming JS (variables, loops, conditionals, functions) Lesson 4 - more JS programming fundamentals, DOM manipulation Lesson 5 - jQuery - making things fly about and look cool Lesson 6 - XML and Ajax Lesson 7 - PHP basics - syntax, server-side principles Lesson 8 - PHP and MySQL - forms, logins, saving user info Lesson 9 - don't know Lesson 10 - don't know Please let me know if you think this is the right order, what have I missed, how to use any spare sessions etc. Thanks :) UPDATE BASED ON RESPONSES: Thanks for all your responses - some great stuff. To be absolutely clear, this is not a computer science course, it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that does something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2 hour session, however I do not have the option of reducing the syllabus, I will just have to explain what the technology does and encourage the student to research it in their own time.

    Read the article

  • Making A Photo-Sharing App For Android In Eclipse [on hold]

    - by user3694394
    I've only just started developing mobile apps, which is something that I've been wanting to learn for a while now. I'm from an indie games studio, making PC games for around the last 3 years, and I finally decided to move into android app development. The only problem I'm having is that I don't know where to start. The project which I'm aiming to create will be something similar to Instagram, basically a photo-sharing app which allows users to take new photos, or pull them from their device, and add filters to them, before posting them. I have a rough idea of how I could go about doing this, but I need pointing towards any tutorials available for each specific step. So, here's my idea: Create a UI in eclipse (this wont be a problem for me, I should be able to do this fine through xml files) Setup a server-side database to store all user info and uploaded images (the images will need to be converted into byte arrays, and I have no idea how to do this through a database). My best idea would be to use a MySQL database to do this. Add user interactions (likes, favourites, reposts, etc.). This would, again, have to be stored in the database (or, at least, i think it would). Add the ability to take new photos using the phone's camera (I can probably do this anyway, using the Camera API). Add the ability to pull existing photos from the device (again, pretty simple to do). Add the ability to add filters to any photos (I had a look around, and there are some repos and resources which allow you to do this, but they're mainly for iOS development). Add facebook/twitter integration (possibly) to allow phots to be shared to other social networks. Create a news feed which shows users all of the latest photos from their friends, and allows them to post their own images to it. Give all registered users their own wall/page which has their latest posts/images displayed on it. Add the ability to allow users to follow other users, and display their followed users posts on their news feed. Yep. It's not going to be easy, and I don't even know if it's possible for me to do alone in Eclipse. However, this is the plan, and I'm going to do my best to learn everything I need to know to do this successfully. My actual question would be how should I start doing this- where do I begin learning how to do all of this? I've had a look at snapify, which can be edited via Parse, but I won't be spending hundreds of dollars (since I'm 15 and just don't have the available funds) on software. I have extensive knowledge of Java (again, I've been making games for around 3 years, mainly in Java), and various scripting languages. So, hopefully, this will be of some use here. Thanks in advance, Josh.

    Read the article

  • Support ARMv7 instruction set in Windows Embedded Compact applications

    - by Valter Minute
    On of the most interesting new features of Windows Embedded Compact 7 is support for the ARMv5, ARMv6 and ARMv7 instruction sets instead of the ARMv4 “generic” support provided by the previous releases. This means that code build for Windows Embedded Compact 7 can leverage features (like the FPU unit for ARMv6 and v7) and instructions of the recent ARM cores and improve their performances. Those improvements are noticeable in graphics, floating point calculation and data processing. The ARMv7 instruction set is supported by the latest Cortex-A8, A9 and A15 processor families. Those processor are currently used in tablets, smartphones, in-car navigation systems and provide a great amount of processing power and a low amount of electric power making them very interesting for portable device but also for any kind of device that requires a rich user interface, processing power, connectivity and has to keep its power consumption low. The bad news is that the compiler provided with Visual Studio 2008 does not provide support for ARMv7, building native applications using just the ARMv4 instruction set. Porting a Visual Studio “Smart Device” native C/C++ project to Platform Builder is not easy and you’ll lack many of the features that the VS2008 application development environment provides. You’ll also need access to the BSP and OSDesign configuration for your device to be able to build and debug your application inside Platform Builder and this may prevent independent software vendors from using the new compiler to improve their applications performances. Adeneo Embedded now provides a whitepaper and a Visual Studio plug-in that allows usage of the new ARMv7 enabled compiler to build applications inside Visual Studio 2008. I worked on the whitepaper and the tools, with the help of my colleagues and now the results can be downloaded from Adeneo Embedded’s website: http://www.adeneo-embedded.com/OS-Technologies/Windows-Embedded (Click on the “WEC7 ARMv7 Whitepaper tab to access the download links, free registration required) A very basic benchmark showed a very good performance improvement in integer and floating-point operations. Obviously your mileage may vary and we can’t promise the same amount of improvement on any application, but with a small effort on your side (even smaller if you use the plug-in) you can try on your own application. ARMv7 support is provided using Platform Builder’s compiler and VS2008 application debugger is not able to debut ARMv7 code, so you may need to put in place some workaround like keeping ARMv4 code for debugging etc.

    Read the article

  • Is there an API for determining congressional districts?

    - by ardavis
    I'm looking to determine the congressional district based on an address my user is providing. This will avoid having the user to look it up themselves. Does an API of this sort exist? Note Through my attempts to find one, I've only come across these: http://www.govtrack.us/developers/api (not sure how to submit an an address or zip code however) The following resources are available in the API ...Bills and resolutions in the U.S. Congress since 1973 (the 93rd Congress). ...A (bill, person) pair indicating cosponsorship, with join and withdrawn dates. ...Members of Congress and U.S. Presidents since the founding of the nation. ...Terms held in office by Members of Congress and U.S. Presidents. Each term corresponds with an election, meaning each term in the House covers two years (one 'Congress'), as President four years, and in the Senate six years (three 'Congresses'). ...Roll call votes in the U.S. Congress since 1789. How people voted is accessed through the Vote_voter API. ...How people voted on roll call votes in the U.S. Congress since 1789. See the Vote API. Filter on the vote field to get the results of a particular vote... http://www.opencongress.org/api (seems to be a way to find congress information, but not districts) This API provides programmers with structured access to all the data on OpenCongress, everything from official bill info to news and blog coverage to user-generated votes on bills and much more... This API defaults to returning XML. All queries can also return JSON... https://groups.google.com/forum/?fromgroups=#!topic/opendems-discuss/CeKyi_aANaE (similar question, no resolution) I've been looking over Open Dems, and seeing what's exposed at this point and what isn't. I work with Democrats Abroad, and am interested in using stuff from the lab for their sites. I quickly looked over the Precinct API, which does both more and less than what I'd need. An ideal resource would be any way of translating addresses into CD at the very least (getting state district data would be good as well), since that would make it easier for DA's membership to make a difference in races like last month's NY26 race... Update I'm looking at the source for the govtrack.us website and the 'doGeoCode' function may be useful. view-source:http://www.govtrack.us/congress/members If no one has any suggestions, I will try to go off of what they are doing.

    Read the article

  • Dynamic Bind9 + DHCP

    - by AcidRod75
    i have been working on setup a server for my internal network, so far i have a working isc-dhcp-server that can upgrade a chrooted BIND9 (on the same machine), i need to add some static entries on the DNS, so users can resolve the websites that resides in our DMZ. What i had tryed all ready was to modify the /etc/bind/named.conf.local with this info: // // Do any local configuration here // // Consider adding the 1918 zones here, if they are not used in your // organization //include "/etc/bind/zones.rfc1918"; key DHCP_UPDATER { algorithm HMAC-MD5.SIG-ALG.REG.INT; secret "MySuperSecretHash"; (this is not the real value BTW) }; zone "quality.internal" IN { type master; file "/var/lib/bind/quality.internal.db"; allow-update { key DHCP_UPDATER; }; }; zone "0.10.10.in-addr.arpa" { type master; file "/var/lib/bind/rev.10.10.0.in-addr.arpa"; allow-update { key DHCP_UPDATER; }; }; logging { channel query.log { file "/var/log/named/query.log"; severity debug 3; }; category queries { query.log; }; }; --- EOF ---- then i added this 2 entries: zone "ourserver.internal" IN { type master; file "/var/lib/bind/ourserver.internal.db"; }; zone "0.16.172.in-addr.arpa" { type master; file "/var/lib/bind/rev.172.16.0.in-addr.arpa"; }; ---- EOF ---- So.. i created the files ourserver.internal.db and rev.172.16.0.in-addr.arpa placed them BOTH in /var/lib/bind/ and changed the permisions so the bind user can access them, restated the service... when i do a NSLOOKUP www.ourserver.internal i get: Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find www.ourserver.internal: NXDOMAIN BUT when i do a reverse lookup.... Server: 127.0.0.1 Address: 127.0.0.1#53 5.0.16.172.in-addr.arpa name = www.ourserver.internal I do not understand what's wrong. Some help with this will save me from installing a new DNS server at the DMZ JUST to host internal site names- TY in advance BTW: the server i'm using has Ubuntu Server 11.10 fully patched.

    Read the article

  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with sudo apt-get upgrade command on my Ubuntu 10.04 box and I have Ubuntu 12.04 3.2.0-29-generic-pae now. I have two monitors and the following GPU: 01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2) After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across two monitors. When Ubuntu starts only one monitor is on. I can see the message on the active monitor: Not optimum mode. Recommended mode: 1680x1050 60Hz I used Nvidia proprietary drivers on 10.04 but now jockey-text --list shows: xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use) xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use) When I run sudo nvidia-settings it says You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server.' I typed nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same with nvidia-current - Enabled but Not in use. I also tried nvidia-173 but I ended up in tty immediately at startup so I removed it. I used to have some problems with Nvidia proprietary drivers on 10.04, I had to put paths to EDID files in /etc/X11/xorg.conf explicitly, but the resolution was as recommended and both monitors were working. If I understand correctly, nouveau drivers are used now by default because the resolution is still quite high, definitely not 800x600, xrandr showed: xrandr: Failed to get size of gamma for output default Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200 default connected 1600x1200+0+0 0mm x 0mm 1600x1200 66.0* 1280x1024 76.0 1024x768 76.0 800x600 73.0 640x480 73.0 640x400 0.0 320x400 0.0 1680x1050_60.00 (0x4f) 146.2MHz h: width 1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz v: height 1050 start 1053 end 1059 total 1089 clock 60.0Hz However, colors seem a bit faded and blurry with nouveau drivers. Mouse cursor is invisible if it's placed inside Firefox window, and only one monitor is working. I like open source and if it's possible I'd prefer to use nouveau drivers but a few things should be fixed. I'm curious why nvidia-current drivers from the repository don't work now. I read it has something to do with the new X11 server in Ubuntu 12.04, is it true? How can I get it back to work?

    Read the article

  • Microsoft Azure News: Capturing VM Images

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2014/05/21/microsoft-azure-news-capturing-vm-images.aspxIf you have a Virtual Machine (VM) in Microsoft Azure that has a specific configuration, it used to be difficult to clone that VM. You had to sysprep the VM, and clone the data disks. This was slow, prone to errors, and stopped you from being productive. No more! A new option, called Capture, allows you to easily select a VM, running or not. The capture will copy the OS disk and data disks and create a new image out of it automatically for you. This means you can now easily clone an entire VM without affecting productivity.  To capture a VM, simply browse to your Virtual Machines in the Microsoft Azure management website, select the VM you want to clone, and click on the Capture button at the bottom. A window will come up asking to name your image. It took less than 1 minute for me to build a clone of my server. And because it is stored as an image, I can easily create a new VM with it. So that’s what I did… And that took about 5 minutes total.  That’s amazing…  To create a new VM from your image, click on the NEW icon (bottom left), select Compute/Virtual Machine/From Gallery, and select My Images from the left menu when selecting an Image. You will find your newly created image. Because this is a clone, you will not be prompted for a new login; the user id/password is the same. About Herve Roggero Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently access data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through Java Persistence API (JPA via TopLink Grid) and now, through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or, send your questions to [email protected]. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. BlogTwitterLinkedInMixForumWiki Technorati Tags: Coherence,cloudtran,cache,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM  Regulated Content in WebCenterUSDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper) Life science customers now have the ability to take advantage of all of the benefits of Oracle’s WebCenter Content, a global leader in Enterprise Content Management.   For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999 and in 2011, certified that WebCenter was a 21CFR Part 11 compliant content management platform (White Paper).  In addition, USDM has built Validation Accelerators Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution.With the Part 11 certification, Oracle’s WebCenter now provides regulated life science organizations  the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including  a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology.  Here are a few screen shot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting. Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever changing business and regulatory requirements.  Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality.Oracle has been #1 in life sciences because of their ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers.  Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.USDM provides:•    Expertise in Life Science ECM Business Processes•    Prebuilt Life Science Configuration in WebCenter •    Validation Accelerator Packs for WebCenterUSDM is very proud to support Oracle’s expanding commitment to Life Sciences…. For more information please contact:  [email protected] Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Having installed the geos library (C++ and C), trying to install the rgeos package (R) reports geos-config missing!

    - by user1873888
Knowing that the rgeos package, from the R language, requires a prior installation of the geos libraries, I installed both libgeos and libgeos-c1 (3.2.2) using the Synaptic installer on my Ubuntu 12.04 (32-bit) machine. Then I tried to install rgeos directly from the R console, and it reported that geos-config was not found. The output is as follows:

    > install.packages("rgeos")
    Installing package(s) into ‘/home/checo/R/i486-pc-linux-gnu-library/2.15’
    (as ‘lib’ is unspecified)
    also installing the dependency ‘sp’
    trying URL 'http://cran.rstudio.com/src/contrib/sp_1.0-9.tar.gz'
    Content type 'application/x-gzip' length 882102 bytes (861 Kb)
    opened URL
    ==================================================
    downloaded 861 Kb
    trying URL 'http://cran.rstudio.com/src/contrib/rgeos_0.2-19.tar.gz'
    Content type 'application/x-gzip' length 221471 bytes (216 Kb)
    opened URL
    ==================================================
    downloaded 216 Kb
    * installing *source* package ‘sp’ ...
    ** package ‘sp’ successfully unpacked and MD5 sums checked
    ** libs
    gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c Rcentroid.c -o Rcentroid.o
    gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c gcdist.c -o gcdist.o
    gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c init.c -o init.o
    gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c pip.c -o pip.o
    gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c pip2.c -o pip2.o
    gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c sp_xports.c -o sp_xports.o
    gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c surfaceArea.c -o surfaceArea.o
    gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c zerodist.c -o zerodist.o
    gcc -std=gnu99 -shared -o sp.so Rcentroid.o gcdist.o init.o pip.o pip2.o sp_xports.o surfaceArea.o zerodist.o -L/usr/lib/R/lib -lR
    installing to /home/checo/R/i486-pc-linux-gnu-library/2.15/sp/libs
    ** R
    ** data
    ** demo
    ** inst
    ** preparing package for lazy loading
    ** help
    *** installing help indices
    ** building package indices
    ** installing vignettes ‘intro_sp.Rnw’ ‘over.Rnw’
    ** testing if installed package can be loaded
    * DONE (sp)
    * installing *source* package ‘rgeos’ ...
    ** package ‘rgeos’ successfully unpacked and MD5 sums checked
    configure: CC: gcc -std=gnu99
    configure: CXX: g++
    configure: rgeos: 0.2-17
    checking for /usr/bin/svnversion... no
    configure: svn revision: 394
    checking geos-config usability... ./configure: line 1385: geos-config: command not found
    no
    configure: error: geos-config not usable
    ERROR: configuration failed for package ‘rgeos’
    * removing ‘/home/checo/R/i486-pc-linux-gnu-library/2.15/rgeos’
    Warning in install.packages :
      installation of package ‘rgeos’ had non-zero exit status

Forgive my ignorance, but I don't know where this file, geos-config, comes from: should it be generated by the gcc compilations above, or should it have been installed when the libgeos libraries were installed? I learnt, from another machine, that geos-config is an executable and that it should be installed in /usr/bin. Do you have any idea what's wrong with my procedure? Thanks, -Sergio.
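For reference, geos-config is not generated by the compilation above; on Debian/Ubuntu it is shipped by the GEOS development package, which the runtime packages libgeos and libgeos-c1 do not include. A likely fix, assuming the stock Ubuntu 12.04 repositories, is:

    # geos-config ships with the GEOS development package,
    # not with the runtime libraries installed via Synaptic
    sudo apt-get install libgeos-dev

    # verify that the script is now on the PATH
    geos-config --version

After that, install.packages("rgeos") in the R console should find geos-config and the configure step should succeed.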

    Read the article

  • Got Samba, got PyNeighbourhood, but still no connection. What else do I need?

    - by Frank A
I am sure I had already hit post before, but then could only find it by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I do not know the right jargon; I am just trying to get answers to my problem, and have reworded things a bit.

This seems to be a number-one requirement for lots of people, and two months on from setting up my Ubuntu PC, I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy: just a few clicks and you get on with using it all. Using command-based approaches and modifying configuration files is hardly user friendly. Googling brings up thousands of solutions, but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours for connecting to networks.

So far I have installed Samba and fiddled with its config file. The day I did all that, it worked from XP to Ubuntu. When I came back two days later to transfer my data over, it would not connect, although the share does show up in Windows (XP) My Network Places.

Today I installed PyNeighbourhood, and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu; it even shows this under the XP workgroup name. But the instructions on setting the connection up seem to relate to an earlier version, and nothing seems to work there either. (I unshared most of those test folders, but they still show up here; that is another question.) When I click on mount (I can only click on the one on the Ubuntu machine; there is one with no name, so I assume this to be my attempt to add one XP shared drive using an IP address), I get errors:

    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    mount error(6): No such device or address
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

OK, I tried to find the manual referred to; I found only an old comment that a manual would be produced for future versions. I saw in another thread that Winbind is needed as well, or at least I assume so? Totally lost again. Please help: what else needs to be installed to connect to Windows PCs on the network?
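For reference, mount error(6) ("No such device or address") from mount.cifs usually means the server or share name could not be resolved. A minimal manual mount from the Ubuntu side looks like the sketch below; the IP address, share name, and username are hypothetical placeholders for the asker's actual values:

    # cifs-utils provides the mount.cifs helper
    sudo apt-get install cifs-utils

    sudo mkdir -p /mnt/xpshare
    # //192.168.1.10/shared and "frank" are placeholders -- substitute the
    # XP machine's address, the share name, and a Windows user on that machine
    sudo mount -t cifs //192.168.1.10/shared /mnt/xpshare -o username=frank

If mounting works by IP address but not by machine name, the missing piece is typically NetBIOS name resolution: install the winbind package and add "wins" to the hosts line in /etc/nsswitch.conf.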

    Read the article

  • To make or not to make... python-nautilus a dependency?

    - by George Edison
That is the question! Okay, all silliness aside, I really am forced to make a difficult decision here. My application is written in C++ and allows other scripts to invoke methods via XML-RPC. One of these scripts is a Nautilus extension written in Python. The extension is packaged with the rest of the application and copied to the appropriate place when installed (/usr/share/nautilus-python/extensions). The problem is that the Nautilus extension requires the python-nautilus package to be installed to be operational. I therefore have four options:

1. Make the python-nautilus package a dependency. This option ensures that anyone who installs my package will be able to use the Nautilus extension. However, it will not be attractive to XFCE or KDE users: a ton of python-nautilus's dependencies will be installed on their machines and take up a lot of space, even if they never use Nautilus.

2. Put the python-nautilus package in the Suggests: or Recommends: field (see the control-file sketch after this list). This option provides the end user with a way to avoid installing the python-nautilus package (by passing --no-install-suggests or --no-install-recommends to apt-get). However, this won't work when the user installs the package in the Software Center. (I always get mixed up as to which of those two fields is installed by default.)

3. Prompt the user when the application is installed or first launched. This option is more complicated than the others but offers the best compromise between making it easy for the user to install python-nautilus (without going into a technical explanation) and not installing it when the user doesn't need it (or want it). I guess the best way to implement this is a simple prompt that invokes apt-get if the user would like the package installed.

4. Don't install the package at all. This option ensures that nobody has python-nautilus installed on their machine unless they want it. However, it also means that my Nautilus extension will simply not run on the end user's machine unless they manually install the package.

Which of these options seems the best choice? Have I missed any pros and cons for each of the options?
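For illustration, here is a hypothetical debian/control excerpt for option 2; "myapp" and its description are placeholder values. Note that apt-get and the Software Center install Recommends by default but never Suggests, so Recommends is the field that keeps the extension working out of the box while remaining removable:

    Package: myapp
    Architecture: any
    Depends: ${shlibs:Depends}, ${misc:Depends}
    # Installed by default by apt-get and the Software Center, but skippable
    # (e.g. via --no-install-recommends) without breaking the main application
    Recommends: python-nautilus
    Description: example application with an optional Nautilus extension

This makes option 2 behave like option 1 for default installs while still giving space-conscious XFCE/KDE users a way to opt out.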

    Read the article
