Search Results

Search found 21053 results on 843 pages for 'process'.


  • links for 2011-02-25

    - by Bob Rhubart
    The (non) Importance of Language (Enterprise Architecture at Oracle) (tags: ping.fm entarch)
    ArchBeat (tags: ping.fm)
    Andrejus Baranovskis's Blog: Beware of Hackers - Keep ADF Task Flows inside WEB-INF
    Oracle ACE Director Andrejus Baranovskis with a word of caution. (tags: oracle oracleace otn adf)
    Introduction to WebCenter Personalization: The Conductor (WebCenter Personalization)
    Steve Pepper offers an introduction to the Conductor component in Oracle WebCenter Personalization. (tags: oracle otn webcenter enterprise2.0)
    Batch Aggregation of files in BPEL process instances based on correlation (AMIS Technology Blog)
    Oracle ACE Director Lucas Jellema shares his solution to a colleague's challenge. (tags: oracle otn oracleace soa bpel)
    Bradley D. Brown: Watch Out Larry...Here they Come!
    "Every Fortune 500 company that I've talked to in the last few months is trying to figure out their mobile strategy. Organizations are getting the push from the top down - i.e. executives are asking for data from their mobile devices." - Oracle ACE Director Brad Brown (tags: oracle otn ipad mobilecomputing entarch oracleace)
    Oracle Technology Network Developer Day - You are the future of Java. Boston, March 3.
    Designed for the enterprise professional, this event will teach you about the latest developments in the Java Virtual Machine, Java EE, Java SE, Java on the Desktop, and Embedded Java. Whether you're a developer or architect, or managing a team of them, this is an event you can't miss. (tags: oracle otn java)

    Read the article

  • NRF Big Show 2011 -- Part 1

    - by David Dorf
    When Apple decided to open retail stores, they came to 360Commerce (now part of Oracle Retail) to help with the secret project. Similarly, when Disney Stores decided to reinvent itself, they also came to us for their POS system. In both cases, visiting a store is an experience where sales take a backseat to entertainment, exploration, and engagement. This quote from a recent Stores Magazine article says it all: "We compete based on an experience, emotion and immersion like Disney," says Neal Lassila, vice president of global information technology for Disney. "That's opposed to [competing] on price and hawking a doll for $19.99. There is no sales pressure technique." Instead, it's about delivering "a great time." While you're attending the NRF conference in New York next week, you'll definitely want to stop by the new 20,000 square-foot Disney store in Times Square. If you're not attending, you can always check out the videos to get a feel for the stores' vibe. This year we've invited Disney Stores to open a pop-up store within the Oracle Retail booth. There will be lots of items on sale that fit in your suitcase, and there's no better way to demonstrate our POS, including the mobile POS running on an iPod Touch. You should also plan to attend Tuesday morning's super-session The Magic of the Disney Store: An Immersive Retail Experience with Steve Finney. In the case of Apple and Disney, less POS is actually a good thing. In both cases it was important to make the checkout process fast and easy so as not to detract from the overall experience. There will be ample opportunities to see this play out in New York next week, so I hope you take advantage.

    Read the article

  • Becoming a Certified Information Professional

    - by Lance Shaw
    Yesterday, we participated with AIIM in a webinar about the Certified Information Professional (CIP) program that they are now offering. The interest level in the program is very high, as evidenced by the high turnout at the event. You might be asking yourself, what does the Oracle WebCenter team care about an AIIM certification program? Well, we sponsored this program because we consistently find that the more educated our customers and prospects are, the more value they are going to get out of the technology we provide. As an ECM vendor, we provide plenty of WebCenter product training and certifications to help you make the most of WebCenter technology. While these are essential and valuable, technologists who also have an operational command of the business and the various impacts that the flow of information can have are even more valuable to an organization. Thinking about the management of content and information and its effect on business process can have wide-ranging benefits, not only to your company but to your personal bottom line. And let's be honest, a customer who is looking holistically at how content is managed is going to see more opportunities to leverage that content, and in many cases this will motivate the purchase of additional product licenses. Now if you are regretting the fact that you missed the webinar yesterday, never fear! It is now available for playback and you can view it at your convenience by visiting the AIIM website. We hope you find it informative and that you can personally profit from being able to showcase your certification as an Information Professional. Additionally, we hope it will help you identify additional opportunities to leverage Oracle WebCenter in order to further reduce your operational costs and drive your business forward.

    Read the article

  • Can anyone explain step-by-step how the as3isolib depth-sorts isometric objects?

    - by Rob Evans
    The library manages to depth-sort correctly, even when using items of non-1x1 sizes. I took a look through the code, but it's a big project to go through line by line! There are some questions about the process, such as: How are the x, y, z values of each object defined? Are they the center points of the objects, or something else? I noticed that the IBounds defines the bounds of the object. If you were to visualise a cuboid of 40, 40, 90 in size, where would each of the IBounds metrics be? I would like to know how as3isolib achieves this, although I would also be happy with a generalised pseudo-code version. At present I have a system that works 90% of the time, but in cases of objects that are along the same horizontal line, the depth is calculated as the same value. The depth calculation currently works like this:
    x = object horizontal center point
    y = object vertical center point
    originX and Y = the origin point relative to the object, so if you want the origin to be the center, the values would be originX = 0.5, originY = 0.5. If you wanted the origin to be the vertical center, horizontal far right of the object, it would be originX = 1.0, originY = 0.5. The origin adjusts the position that the object is transformed from.
    AABB_width = the bounding box width.
    AABB_height = the bounding box height.
    depth = x + (AABB_width * originX) + y + (AABB_height * originY) - z;
    This generates the same depth for all objects along the same horizontal x.
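
    To make the tie concrete, here is a minimal Python sketch of the depth formula described above; it reproduces the asker's current calculation (not as3isolib's actual algorithm), and the sample values are made up:

      def depth(x, y, z, aabb_width, aabb_height, origin_x=0.5, origin_y=0.5):
          """Depth key for back-to-front sorting, exactly as described in the question."""
          return x + aabb_width * origin_x + y + aabb_height * origin_y - z

      # Two objects on the same screen-horizontal line end up with the same depth:
      a = depth(x=10, y=20, z=0, aabb_width=40, aabb_height=40)
      b = depth(x=20, y=10, z=0, aabb_width=40, aabb_height=40)
      print(a, b)  # 70.0 70.0 -- the tie the question reports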

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.
    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise, which combines R with the Oracle Database, enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.
    The problem: A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation.
    IRR – a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post, we will discuss calculating the "money weighted rate of return," but in the actual customer proof of concept we used R to calculate both money weighted and time weighted rates of return. You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return
    First Steps - Calculating IRR in R: We will start with calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step we did was to get some sample data. For a historical IRR calculation, you need balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel. The sample spreadsheet provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value).
    Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well; R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package. Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData.
    Writing the IRR function: At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet, or if you are old school you can look in an investment performance textbook.
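
    The original post loaded the workbook with R's RODBC package and a getsheet() helper whose code appeared as a screenshot. As a rough stand-in, here is a minimal Python/pandas sketch of the same sheet-per-account loading idea; the file name and column names are assumptions, and the original was R code, not Python:

      import pandas as pd

      # Assumed workbook layout: one sheet per account, as in the post's sample data.
      sheets = pd.read_excel("sample_accounts.xlsx", sheet_name=None)  # dict: name -> DataFrame

      frames = []
      for account, df in sheets.items():
          df = df.copy()
          df["account"] = account      # tag each row with its sheet (account) name
          frames.append(df)

      # Analogue of the post's SimpleMWRRData data.frame: all accounts stacked together.
      simple_mwrr_data = pd.concat(frames, ignore_index=True)
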
    In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof-of-concept is that by using R to write our IRR functions, we could easily incorporate the specific variations and business rules of the customer into the calculation.)
    The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining an NPV function, where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time.
    Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum. In the call to optimize, low and high are scalars that indicate the range to search for an answer. To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high; we will set low and high to some reasonable defaults. For example, this account had a negative 2.2% money weighted rate of return.
    Enhancing and Packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, then, if we did not find an answer, try calling optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm.
    At this point, we can now write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In our actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)
    To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package by calling package.skeleton(). To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory. In those files, you need to at least provide a value for the "title" section.
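
    The post's npv() definition and optimize() call were shown as images that did not survive extraction. Below is a minimal Python sketch of the same approach, using SciPy's bounded scalar minimizer in place of R's optimize(); the exact discounting convention, tolerances, and retry ranges are assumptions, not the post's code:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def npv(r, bmv, cf, t, emv, tend):
          """One common money-weighted formulation (assumed; the post's exact formula
          was in a lost screenshot): grow bmv and each flow forward to tend at rate r;
          at the true IRR this difference from emv is zero."""
          cf = np.asarray(cf, dtype=float)
          t = np.asarray(t, dtype=float)
          return bmv * (1 + r) ** tend + np.sum(cf * (1 + r) ** (tend - t)) - emv

      def solve_irr(bmv, cf, t, emv, tend):
          """Mirror the post's strategy: minimize |npv| over a narrow range first,
          then retry with a broader range if no root was found."""
          for low, high in [(-0.5, 0.5), (-0.999, 10.0)]:
              res = minimize_scalar(lambda r: abs(npv(r, bmv, cf, t, emv, tend)),
                                    bounds=(low, high), method="bounded",
                                    options={"xatol": 1e-10})
              if abs(npv(res.x, bmv, cf, t, emv, tend)) < 1e-6:
                  return res.x
          return None  # no usable answer in either range

      # Toy check: start at 1000, deposit 100 half way through the year, end at 1080.
      print(solve_irr(bmv=1000, cf=[100], t=[0.5], emv=1080, tend=1.0))  # ~ -0.019
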
    Once that is done, you can change directory to the IRR directory and build and install the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package
    Testing the myIRR package: Here is an example of testing our IRR function once it was converted to an installable package.
    Calculating IRR for All the Accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is: how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R's "by" function. (Other approaches to Split-Apply-Combine, such as plyr, can also be used. See http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.) Here is an example showing the use of "by" to calculate the money weighted rate of return for each account in our sample data set (a rough analogue is sketched below).
    Recap and Next Steps: At this point, you've seen the power of R being used to calculate IRR. There were several good things:
    - R could easily work with the spreadsheets of sample data we were given
    - R's optimize() function provided a nice way to solve for IRR: it was both fast and allowed us to avoid having to code our own iterative approximation algorithm
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once
    However, there are several challenges yet to be conquered at this point in our story:
    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor
    - The actual data needs to be kept secured: another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes
    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
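
    The screenshot of that by() call was also lost. As a hedged stand-in, here is a Python/pandas sketch of the same Split-Apply-Combine step, reusing solve_irr() and simple_mwrr_data from the sketches above; the column names are assumptions, and the original used R's by() on a data.frame:

      import pandas as pd

      def account_mwrr(df):
          """Apply step: compute one account's money weighted rate of return."""
          df = df.sort_values("t")
          return solve_irr(bmv=df["BMV"].iloc[0],
                           cf=df["FLOW"].tolist(),
                           t=df["t"].tolist(),
                           emv=df["EMV"].iloc[-1],
                           tend=df["t"].iloc[-1])

      # Split on account, apply the IRR function, combine into one Series.
      returns = simple_mwrr_data.groupby("account").apply(account_mwrr)
      print(returns)  # one money weighted rate of return per account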

    Read the article

  • Can't get Ubuntu 11.10 working on my VirtualBox running on Mac OS X 10.6.8

    - by stack-o-frankie
    I installed the Guest Additions and installed the isight-firmware-tools using the AppleUSBVideoSupport file, but I still can't get access to the iSight webcam. When I launch vlc v4l2:///dev/video0 I get the following errors:
    Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS")
    Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE")
    [0x92d492c] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
    Blocked: call to setlocale(6, "")
    Blocked: call to setlocale(6, "")
    (process:2922): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale.
    (vlc:2922): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (vlc:2922): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (vlc:2922): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (vlc:2922): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    [0x963287c] v4l2 demux error: VIDIOC_STREAMON failed
    [0x963287c] v4l2 demux error: cannot set input (Device or resource busy)
    [0x96430a4] v4l2 access error: VIDIOC_STREAMON failed
    [0x96430a4] v4l2 access error: cannot set input (Device or resource busy)
    [0x9371104] main input error: open of `v4l2:///dev/video0' failed: (null)
    Any clue?

    Read the article

  • Does disabling the MySQL error log increase performance? How do I disable it?

    - by adnan
    Does disabling the error log for MySQL increase its performance? How do I disable it? This is my server status:
    Server load 0.63 (8 CPUs)
    Memory Used 23.38% (957,600 of 4,096,000)
    Swap Used 0% (0 of 1)
    And this is a screenshot of the process manager: http://elnhrda.com/promgr.jpg
    This is my.cnf:
    [mysqld]
    query_cache_size=64M
    skip-name-resolve
    #innodb_file_per_table=1
    query_cache_limit=2M
    read_buffer_size = 2M
    read_rnd_buffer_size = 16M
    sort_buffer_size = 8M
    join_buffer_size = 8M
    thread_cache_size = 8
    thread_concurrency = 8
    innodb_buffer_pool_size = 2G
    I am looking to do anything that will increase my website's speed. I have a VPS with 4 GB RAM running CentOS 6 x86_64.
    Please note: these statistics were taken just now, with no queries executing and no visitors on the site at the same time.

    Read the article

  • Why is Maven so slow compared to automake?

    - by ???'Lenik
    I have a Maven project consisting of around 100 modules. I have reasons to decompose the project into so many modules, and I don't think I should merge them in order to speed up the build process. I have read a lot of projects by other people, e.g., the Maven project itself, Apache Archiva, and the Hudson project; they all consist of a lot of modules, nearly 100, more or less. The problem is that building them all takes so much time: 3 hours for the first build (this is acceptable, because there are a lot of artifacts to download), and 15 minutes for the second build (this is not acceptable). With automake, things are similar. The first time, you need to configure the project to prepare the magical config.h file, which is far more complex than what Maven does. But it's still fast, maybe 10 seconds on my Debian box. After that, make install requires maybe 10 minutes for the first build. However, once everything is prepared and the .o object files are generated, they don't have to be rebuilt at all for the second build. (In Maven, everything is rebuilt every time.) I really wonder how people working on Maven projects can bear this long time for each build; I just can't sit calmly through each Maven build, it takes too long, really.

    Read the article

  • Coherence Management with EM Cloud Control 12c – demo for partners

    - by JuergenKress
    For access to the Oracle demo systems, please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the Coherence Management demo that showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics (licensed under WLS Management Pack EE and Management Pack for Non-Oracle MW). This demo specifically focuses on some of the performance management and configuration management solutions for Oracle Coherence. The demo flow showcases the key enhancements made in the Enterprise Manager 12c release, which include a new customizable performance summary, cache data management, and configuration management.
    Demo Highlights: The demo showcases the following capabilities:
    - Centralized monitoring for enterprise-wide Coherence deployments
    - Drill-down diagnostics
    - Customizable performance views
    - Monitoring performance trends
    - Monitoring caches, nodes, services, etc.
    - Performance and log alerts
    - Real-time Java diagnostics and memory leak analysis
    - Cache data management
    - Lifecycle management: provisioning Coherence on a new machine, starting nodes on a machine where Coherence is already running, killing a node process
    Demo Instructions: Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the "Middleware Management" section, click on the link "EM Cloud Control 12c Coherence Management" (tagged as "NEW"). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo. Read more on Community Events and post your comment here.
    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community. Please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: Coherence, Coherence demo, DSS, CAF, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • bluetooth headset can connect, but not visible in pulse audio

    - by Kim Marivoet
    I have a Plantronics bluetooth headset, and until yesterday I could use it without any problem. Today, however, it suddenly stopped working (maybe related to the last software update I did). I can still connect/disconnect the headset, but it doesn't show up in PulseAudio anymore. I read through various posts that describe kind of the same problem, but none of the suggested solutions worked. I get the following errors in the syslog:
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/HFPAG
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSource
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSink
    Oct 13 16:50:09 desktop kernel: [ 17.340943] input: 48:C1:AC:08:FE:8F as /devices/virtual/input/input14
    Oct 13 16:50:09 desktop bluetoothd[1040]: /org/bluez/1040/hci0/dev_48_C1_AC_08_FE_8F/fd0: fd(36) ready
    Oct 13 16:50:09 desktop rtkit-daemon[1894]: Successfully made thread 2213 of process 1892 (n/a) owned by '1000' RT at priority 5.
    Oct 13 16:50:09 desktop rtkit-daemon[1894]: Supervising 5 threads of 1 processes of 1 users.
    Oct 13 16:50:10 desktop bluetoothd[1040]: Badly formated or unrecognized command: AT+XEVENT=USER-AGENT,COM.PLANTRONICS,PLT_VOYAGERPRO,0109,27.90,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    Oct 13 16:50:10 desktop bluetoothd[1040]: Audio connection got disconnected
    Any help would be much appreciated. I'm using Ubuntu 12.04. Thanks, Kim

    Read the article

  • How to view bad blocks on mounted ext3 filesystem?

    - by Basilevs
    I ran fsck -c on the (unmounted) partition in question a while ago. The process was unattended and the results were not stored anywhere (except in the badblocks inode). Now I'd like to get the badblocks information, to know whether there are any problems with the hard drive. Unfortunately, the partition is used in a production system and can't be unmounted. I see two ways to get what I want:
    1. Run badblocks in read-only mode. This will probably take a lot of time and place an unnecessary burden on the system.
    2. Somehow extract the information about badblocks from the filesystem itself.
    How can I view the known badblocks registered in a mounted filesystem?

    Read the article

  • What is the impact of Windows 8 with UEFI on normal users?

    - by Sam
    I am a normal man-in-the-street computer user and so do not really understand what this is about, but I want to. Can someone please explain to me whether:
    1. The Windows 8/UEFI secure boot thing will make it impossible to run normal/legacy applications in Windows 8 (as they will be unsigned)?
    2. It will turn Windows into an Apple-like system where only Microsoft-approved applications can be run?
    As I say, I'm a normal user, and that is the overall impression I have from reading all the blogs, etc. about it. If, on the other hand, all it does is make sure the system is booting a signed OS, how does this prevent malware (which is what at least two Microsoft blogs that I read seemed to be saying), given that most malware is not part of the boot process? The only way I can see this making sense is if it is ensuring that all OS components are signed. Is that it? Like I say, I'm a mortal, so please don't get technical on me; rather, explain how it will affect me, the user.

    Read the article

  • help setting up an IPSEC vpn from my linux box

    - by robthewolf
    I have an office with a router and a remote server (Linux - Ubuntu 10.10). Both locations need to connect to a data supplier through a VPN; the VPN is an IPSEC gateway. I was able to configure my Linksys RV042 router to create a VPN connection successfully, and now I need to do the same for the Linux server. I have been messing around with this for too long. First I tried OpenVPN, but that is SSL and not IPSEC. Then I tried Shrew. I think I have the settings correct, but I haven't been able to create the connection. It may be that I have to use something else, like a direct IPSEC config. If someone knows of a way to turn the following settings that I have been given into a working IPSEC VPN connection, I would be very grateful. Here are the settings I was given that must be used to connect to my supplier:
    Local destination network: 192.168.4.0/24
    Local destination hosts: 192.168.4.100
    Remote destination network: 192.167.40.0/24
    Remote destination hosts: 192.168.40.27
    VPN peering point: xxx.xxx.xxx.xxx
    Then they have given me the following details.
    IPSEC/ISAKMP Phase 1 parameters:
    Authentication method: pre shared secret
    Diffie Hellman group: group 2
    Encryption algorithm: 3DES
    Lifetime in seconds: 28800
    Phase 2 parameters:
    IPSEC security: ESP
    Encryption algorithms: 3DES
    Authentication algorithms: MD5
    Lifetime in seconds: 28800
    PFS: disabled
    Here are the settings from my attempt to use Shrew:
    n:version:2
    n:network-ike-port:500
    n:network-mtu-size:1380
    n:client-addr-auto:0
    n:network-frag-size:540
    n:network-dpd-enable:1
    n:network-notify-enable:1
    n:client-banner-enable:1
    n:client-dns-used:1
    b:auth-mutual-psk:YjJzN2QzdDhyN2EyZDNpNG42ZzQ=
    n:phase1-dhgroup:2
    n:phase1-keylen:0
    n:phase1-life-secs:28800
    n:phase1-life-kbytes:0
    n:vendor-chkpt-enable:0
    n:phase2-keylen:0
    n:phase2-pfsgroup:-1
    n:phase2-life-secs:28800
    n:phase2-life-kbytes:0
    n:policy-nailed:0
    n:policy-list-auto:1
    n:client-dns-auto:1
    n:network-natt-port:4500
    n:network-natt-rate:15
    s:client-dns-addr:0.0.0.0
    s:client-dns-suffix:
    s:network-host:xxx.xxx.xxx.xxx
    s:client-auto-mode:pull
    s:client-iface:virtual
    s:client-ip-addr:192.168.4.0
    s:client-ip-mask:255.255.255.0
    s:network-natt-mode:enable
    s:network-frag-mode:disable
    s:auth-method:mutual-psk
    s:ident-client-type:address
    s:ident-client-data:192.168.4.0
    s:ident-server-type:address
    s:ident-server-data:192.168.40.0
    s:phase1-exchange:aggressive
    s:phase1-cipher:3des
    s:phase1-hash:md5
    s:phase2-transform:3des
    s:phase2-hmac:md5
    s:ipcomp-transform:disabled
    Finally, here is the debug output from the Shrew log:
    10/12/22 17:22:18 ii : ipc client process thread begin ...
    10/12/22 17:22:18 < A : peer config add message
    10/12/22 17:22:18 DB : peer added ( obj count = 1 )
    10/12/22 17:22:18 ii : local address 217.xxx.xxx.xxx selected for peer
    10/12/22 17:22:18 DB : tunnel added ( obj count = 1 )
    10/12/22 17:22:18 < A : proposal config message
    10/12/22 17:22:18 < A : proposal config message
    10/12/22 17:22:18 < A : client config message
    10/12/22 17:22:18 < A : local id '192.168.4.0' message
    10/12/22 17:22:18 < A : remote id '192.168.40.0' message
    10/12/22 17:22:18 < A : preshared key message
    10/12/22 17:22:18 < A : peer tunnel enable message
    10/12/22 17:22:18 DB : new phase1 ( ISAKMP initiator )
    10/12/22 17:22:18 DB : exchange type is aggressive
    10/12/22 17:22:18 DB : 217.xxx.xxx.xxx:500 <- 206.xxx.xxx.xxx:500
    10/12/22 17:22:18 DB : c1a8b31ac860995d:0000000000000000
    10/12/22 17:22:18 DB : phase1 added ( obj count = 1 )
    10/12/22 17:22:18 : security association payload
    10/12/22 17:22:18 : - proposal #1 payload
    10/12/22 17:22:18 : -- transform #1 payload
    10/12/22 17:22:18 : key exchange payload
    10/12/22 17:22:18 : nonce payload
    10/12/22 17:22:18 : identification payload
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( draft v00 )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( draft v01 )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( draft v02 )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( draft v03 )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( rfc )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports DPDv1
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local is SHREW SOFT compatible
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local is NETSCREEN compatible
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local is SIDEWINDER compatible
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local is CISCO UNITY compatible
    10/12/22 17:22:18 = : cookies c1a8b31ac860995d:0000000000000000
    10/12/22 17:22:18 = : message 00000000
    10/12/22 17:22:18 - : send IKE packet 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 ( 484 bytes )
    10/12/22 17:22:18 DB : phase1 resend event scheduled ( ref count = 2 )
    10/12/22 17:22:18 ii : opened tap device tap0
    10/12/22 17:22:28 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500
    10/12/22 17:22:38 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500
    10/12/22 17:22:48 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500
    10/12/22 17:22:58 ii : resend limit exceeded for phase1 exchange
    10/12/22 17:22:58 ii : phase1 removal before expire time
    10/12/22 17:22:58 DB : phase1 deleted ( obj count = 0 )
    10/12/22 17:22:58 ii : closed tap device tap0
    10/12/22 17:22:58 DB : tunnel stats event canceled ( ref count = 1 )
    10/12/22 17:22:58 DB : removing tunnel config references
    10/12/22 17:22:58 DB : removing tunnel phase2 references
    10/12/22 17:22:58 DB : removing tunnel phase1 references
    10/12/22 17:22:58 DB : tunnel deleted ( obj count = 0 )
    10/12/22 17:22:58 DB : removing all peer tunnel refrences
    10/12/22 17:22:58 DB : peer deleted ( obj count = 0 )
    10/12/22 17:22:58 ii : ipc client process thread exit ...

    Read the article

  • Why would 70-persistent-net.rules have no effect?

    - by Wes Felter
    I've got a Saucy server with a lot of NICs, and they end up with weird names like "rename19". I know interface names can be changed by modifying the /etc/udev/rules.d/70-persistent-net.rules file. The first clue that something is wrong is that that file did not exist, even though it's supposed to be created automatically. So I decided to write my own, based on advice from Linux From Scratch:
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.0", NAME="eth0"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.1", NAME="eth1"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.2", NAME="eth2"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.3", NAME="eth3"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.0", NAME="mezz0"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.1", NAME="mezz1"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.0", NAME="slot1a"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.1", NAME="slot1b"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.0", NAME="slot2a"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.1", NAME="slot2b"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.0", NAME="slot3a"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.1", NAME="slot3b"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.0", NAME="slot4a"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.1", NAME="slot4b"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.0", NAME="slot5a"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.1", NAME="slot5b"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.0", NAME="slot6a"
    ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.1", NAME="slot6b"
    (I'm matching on PCI IDs instead of MAC addresses because I have multiple identical machines that I want to apply this configuration to.) After rebooting, nothing has changed. It's like these rules aren't even being read. There's not much going on in dmesg either:
    $ dmesg | grep udev
    [ 3.196629] systemd-udevd[323]: starting version 204
    [ 6.719140] systemd-udevd[550]: starting version 204
    [ 38.695050] init: udev-fallback-graphics main process (1658) terminated with status 1

    Read the article

  • How can a usb be detected but not show up anywhere?

    - by George Mauer
    I started the morning by trying to create a bootable USB using a 2 GB stick and the Startup Disk Creator. It seemed to run through the whole process just fine until it got to a screen that read something like "Creating memory partition", which sat at 100% for about 45 minutes before I hit cancel and removed the USB stick. Now the USB stick is not being detected as storage or... anything (even on my Windows PC), though it does show up in the syslog. Allow me to demonstrate. We start with the USB not plugged in:
    [georgemauer@ubuntu:~]$ sudo fdisk -l (04-04 16:01)
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x994bdc0f
    Device Boot Start End Blocks Id System
    /dev/sda1 2048 27650047 13824000 27 Hidden NTFS WinRE
    /dev/sda2 * 27650048 27854847 102400 7 HPFS/NTFS/exFAT
    /dev/sda3 27854848 976771119 474458136 7 HPFS/NTFS/exFAT
    I plug in the USB:
    [georgemauer@ubuntu:~]$ tail -f /var/log/syslog
    ***Snip***
    Apr 4 15:01:18 ubuntu wpa_supplicant[1136]: WPA: Group rekeying completed with 00:24:36:ad:e7:3f [GTK=TKIP]
    Apr 4 15:02:29 wpa_supplicant[1136]: last message repeated 3 times
    Apr 4 15:02:29 ubuntu kernel: [22122.788133] usb 2-1: new high speed USB device number 13 using ehci_hcd
    Apr 4 15:02:29 ubuntu kernel: [22122.923873] scsi10 : usb-storage 2-1:1.0
    Apr 4 15:02:29 ubuntu mtp-probe: checking bus 2, device 13: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-1"
    Apr 4 15:02:30 ubuntu mtp-probe: bus: 2, device: 13 was not an MTP device
    Apr 4 15:02:30 ubuntu kernel: [22123.926154] scsi 10:0:0:0: Direct-Access GENERIC USB Mass Storage 1.00 PQ: 0 ANSI: 2
    Apr 4 15:02:30 ubuntu kernel: [22124.105118] sd 10:0:0:0: Attached scsi generic sg1 type 0
    Apr 4 15:02:30 ubuntu kernel: [22124.108212] sd 10:0:0:0: [sdb] Attached SCSI removable disk
    but then:
    [georgemauer@ubuntu:~]$ ls /mnt -alF (04-04 16:02)
    total 8
    drwxr-xr-x 2 root root 4096 2011-04-21 12:51 ./
    drwxr-xr-x 26 root root 4096 2012-03-31 13:16 ../
    [georgemauer@ubuntu:~]$ ls /media -alF (04-04 16:03)
    total 8
    drwxr-xr-x 2 root root 4096 2012-04-04 12:18 ./
    drwxr-xr-x 26 root root 4096 2012-03-31 13:16 ../
    What could be going on, and how do I recover my USB key?

    Read the article

  • Pre game loading time vs. in game loading time

    - by Keeper
    I'm developing a game that includes a randomly generated maze. There are some AI creatures lurking in the maze, and I want them to patrol along paths that follow the maze's shape. There are two possible ways for me to implement this: the first (which I used) is to calculate several desired patrol paths once the maze is created; the second is to calculate a path only when a creature starts patrolling it. My main concern is loading times. If I calculate many paths when the maze is created, the pre-loading time is a bit long, so I thought about calculating them as needed. At the moment the game is not 'heavy', so calculating paths mid-game is not noticeable, but I'm afraid it will be once the game gets more complicated. Any suggestions, comments, or opinions will be of help.
    Edit: As of now, let p be the number of pre-calculated paths; a creature has a probability of 1/p of taking a new path (which means a path calculation) instead of an existing one (see the sketch below). A creature does not start its patrol until the path is fully calculated, of course, so there's no need to worry about it getting killed in the process.
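
    A minimal Python sketch of the selection rule from the edit above; the function names and the path-computation callback are placeholders, not code from the game:

      import random

      def choose_patrol_path(precalculated, compute_new_path):
          """With p pre-calculated paths, compute a fresh path with probability 1/p;
          otherwise reuse an existing one (the 1/p rule described in the edit)."""
          p = len(precalculated)
          if p == 0 or random.random() < 1.0 / p:
              path = compute_new_path()      # placeholder for the game's pathfinder
              precalculated.append(path)     # grow the pool so reuse gets likelier
              return path
          return random.choice(precalculated)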

    Read the article

  • Content Flood and Frustration? ==> Content Salvation via WebCenter

    - by Michael Snow
    If you are still stuck on Documentum and have somehow missed hearing about our Move-Off Documentum program, here's a refresher on the basics to help save you from your ongoing frustration: check out Capitalizing on Content. How much content have you pushed around over email, through collaboration, and via social channels this week? Have you been engaged with a process that has been smooth, problem-free, and leaves you with a lasting impression of a great user experience? How are you managing your online presence to ensure that your customers, partners, employees, and potential future customers are experiencing a consistently engaging web experience? You really can't do this without a solid Web Experience Management platform. Learn more from the CITO Research white paper: Creating a Successful and Meaningful Customer Experience on the Web. If you'd like to catch up on what Oracle WebCenter has been doing lately, check out the latest recording, now available on demand: March 2012 Quarterly Customer Update Webcast. It's amazing how much can be done in a day on the internet; it's even more impressive to see this presented in an infographic like the one below by mbaonline.com. How much longer can you manage the ever-increasing volumes of content without some technology assistance? Created by: MBAOnline.com

    Read the article

  • 12.04 kernel panic

    - by jesus
    Hello, I'm having trouble with Ubuntu 12.04. I downloaded the ISO and burned it to a CD and tried to install Ubuntu, but during the installation process I got this message:
    Kernel panic - not syncing: Attempted to kill init! pid: 1, comm: init Not tainted 3.2.0-23-generic-pae #36-Ubuntu
    Then it says it is switching to the text console, and it just hangs there and nothing happens. I burned another CD with Ubuntu 10.04, but I get a different problem: Ubuntu just gets stuck at the splash screen with the 5 dots which change from red to white, and nothing happens. I was able to boot before with 10.04, but I recently upgraded my computer's graphics card and power supply. Please see my system specs:
    Gigabyte GA-M68M-S2P (rev. 2.3), GeForce 7025 chipset
    Athlon II X4 640 quad-core
    Radeon HD 6870 Sapphire
    OCZ ModXStream Pro 600W Modular High Performance Power Supply
    3 hard drives, one IDE and the other two SATA
    IDE DVD drive
    If anyone could help me, I'd be most appreciative. If it helps, I tried the F6 options like acpi=off noapic nolapic nomodeset, but I still get a kernel panic.

    Read the article

  • Recruiters intentionally present one good candidate for an available job

    - by Jeff O
    Maybe they do it without realizing it. The recruiter's goal is to fill the job as soon as possible. I even think they feel it is in their best interest that the candidate be qualified, so I'm not trying to knock recruiters. Aren't they better off presenting 3 candidates, where one clearly stands out? The last thing they want from their client is a need to extend the interview process because the client can't decide. If the client doesn't like any of them, you just bring on your next good candidate. This way they hedge their bet a little. Any experience or insight with this, or has anyone ever heard a head-hunter admit it? Does it make sense? There has to be a reason why they choose such unqualified people. I've seen jobs posted that clearly state they want someone with a CS degree, and the recruiter doesn't take it literally. I don't have a CS degree or Java experience, and still they think I'm a possible fit.

    Read the article

  • UPDATE: Keeping It Clean in San Francisco

    - by Oracle OpenWorld Blog Team
    by Karen Shamban
    The results are in, and September 15 was a huge success for the organizers of Coastal Cleanup Day - and more important, for our beautiful and unique California coastal environment. Here are some inspiring stats. More than:
    - 1,500 volunteers reported in for duty at the Ocean Beach cleanup location (including 150 Oracle employees and family members)
    - 57,000 volunteers participated statewide
    - 320 tons picked up, including 534,115 pounds of trash and 105,816 pounds of recyclable materials
    Remember: KEEP IT CLEAN! You don't have to wait for the annual Coastal Cleanup Day to do your part. The beaches, fish, mammals, birds, and your fellow human beings will thank you.
    Join us on September 15, when California's largest volunteer event -- Coastal Cleanup Day -- is taking place. You can help by joining Oracle, Oracle partners, and many others at the Ocean Beach cleanup. Be sure to check in at the Oracle table that will be set up there. You'll receive an Oracle t-shirt for participating (while supplies last), and can sign up to receive an emailed code that will get you a complimentary Discover pass* to Oracle OpenWorld and JavaOne. And be sure to get yourself into the group photo, which will be shown on the Oracle OpenWorld and JavaOne websites.
    When and where: Ocean Beach at Fulton Street, San Francisco
    Saturday, September 15, 2012, 9 a.m. to noon
    Click here for more information, and to register.
    *Note: Oracle employees should register for the Ocean Beach cleanup here, and must register for Oracle OpenWorld or JavaOne using the standard employee registration process. Oracle employees are not eligible for the Discover pass offer.

    Read the article

  • Is "as long as it works" the norm?

    - by q303
    Hi, my last shop did not have a process. Agile essentially meant they did not have a plan at all about how to develop or manage their projects. It meant "hey, here's a ton of work. Go do it in two weeks. We're fast paced and agile." They released stuff that they knew had problems. They didn't care how things were written. There were no code reviews -- despite there being several developers. They released software they knew to be buggy. At my previous job, people had the attitude that as long as it works, it's fine. When I asked for a rewrite of some code I had written while we were essentially exploring the spec, they denied it. I wanted to rewrite the code because code was repeated in multiple places, there was no encapsulation, and it took people a long time to make changes to it. So essentially, my impression is this: programming boils down to the following:
    - Reading some book about the latest tool/technology
    - Throwing code together based on this, avoiding writing any individual code because the company doesn't want to "maintain custom code"
    - Showing it and moving on to the next thing, "as long as it works."
    I've always told myself that at the next job I'm going to get a better shop. It never happens. If this is it, then I feel stuck. The technologies always change; if the only professional development here is reading the latest MS Press technology book, then what have you built in 10 years but a superficial knowledge of various technologies? I'm concerned about:
    - The best way to have professional standards
    - How to develop meaningful knowledge and experience in this situation

    Read the article

  • Does it actually matter whether you have open applications when installing new software?

    - by Dan
    It seems the norm these days is for installers/setup programs to request that you close all open applications before initiating the install process for a piece of new software. I used to obediently follow these directions without fail, even though it could sometimes be frustrating having to close open documents and stop working on things just to get a new, seemingly unrelated application installed. Then at some point I simply stopped bothering. Nowadays if I have a lot of stuff going on I might even run multiple installers at the same time; I can't even recall a time it has ever posed a problem. Why do setup programs even make this request in the first place, then, when it appears to be unnecessary? Is this just to simplify troubleshooting for companies' support people? Has anyone else ever run into problems as a result of trying to install an app while other apps were open?

    Read the article

  • No GUI boot; startx error, I suspect no filesystem corruption.

    - by Dharmaj Soni
    Until yesterday, my Ubuntu 9.10 was working fine. I had watched a movie using VLC, and I had also charged my iPod using the laptop. Today, when I started it, I booted straight into the command line. There seems to be no filesystem corruption, as I can view/open (text) files. Before the CLI appeared, the screen blinked with a cursor, then the white Ubuntu logo flashed, and then I got the CLI login prompt. After logging in, if I try startx to start GNOME, I get the following error after a few seconds:
    giving up
    xinit: No such file or directory (errno 2): unable to connect to X server
    xinit: No such process (errno 3): Server error
    The same error comes up if I use sudo, or if I change my directory to '/' before using startx, and also when I use the recovery mode option from GRUB to load into the CLI and then try startx. On trying the command 'xinit', I get "Server error". Also, on trying GDM, I get 2 errors. I cannot connect to the internet in this state. Thanks for any help. I am using a Dell Inspiron 1440, with no special graphics card.

    Read the article

  • Firefox Master Password (ssh-agent)

    - by BCable
    I use the master password feature of Firefox, and I also use SSH keys to log in to a bunch of UNIX machines. For SSH, there is a very useful application called ssh-agent that runs in the background holding the unlocked key, so you don't have to type the passphrase every single time you want to connect. I open and close Firefox a lot, so I was curious: is there a way to have Firefox run in the background (preferably doing nothing, but the whole process would be fine I guess as well) so that I don't have to type my master password every single time I open Firefox? Thanks!

    Read the article
