Search Results

Search found 6392 results on 256 pages for 'bash history'.


  • Laptop battery drains quickly after unplugging AC power

    - by Shyam
    I have been an Ubuntu user for years and have never run into this problem until now. My battery drains almost immediately after I unplug the AC power. Here is what I have tried:

    1) Checked the battery state with:

        cat /proc/acpi/battery/BAT0/state
        present: yes
        capacity state: ok
        charging state: charged
        present rate: 0 mA
        remaining capacity: 392 mAh
        present voltage: 12476 mV

    Initially it showed "charging state: charging"; after about five minutes it started displaying "charged". If I then unplug the AC power, it immediately shows a low-battery notification.

    2) Running acpi:

        acpi -b
        Battery 0: Unknown, 9%

    The battery state shows as Unknown. But right after plugging in the AC adapter:

        acpi -b
        Battery 0: Charging, 9%, 13:04:00 until charged

    3) Checking the same with upower:

        upower -i /org/freedesktop/UPower/devices/battery_BAT0
        native-path:          /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C09:00/PNP0C0A:00/power_supply/BAT0
        vendor:               HP
        power supply:         yes
        updated:              Thu Nov 1 16:06:40 2012 (20 seconds ago)
        has history:          yes
        has statistics:       yes
        battery
          present:             yes
          rechargeable:        yes
          state:               charging
          energy:              4.2336 Wh
          energy-empty:        0 Wh
          energy-full:         33.1128 Wh
          energy-full-design:  33.1128 Wh
          energy-rate:         5.6052 W
          voltage:             12.474 V
          time to full:        5.2 hours
          percentage:          12.7854%
          capacity:            100%
          technology:          lithium-ion

    The power statistics output says it needs about five hours to charge completely, but even after charging for more than five hours, unplugging the AC power immediately produces the LOW BATTERY warning again. The same hardware does not show this behaviour under Windows 7. Any suggestions or help will be greatly appreciated.
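
    If it helps anyone debugging the same symptom: logging the reported charge over time after unplugging can show whether the battery is genuinely emptying or just being misreported (a fast-falling energy value with capacity still shown as 100% tends to point at a worn or mis-reporting pack rather than a software problem). A minimal sketch, assuming upower is installed and the battery enumerates as BAT0:

        # Log the battery state every 30 seconds; watch how fast "energy" falls once the AC is unplugged.
        while true; do
            date
            upower -i /org/freedesktop/UPower/devices/battery_BAT0 \
                | grep -E 'state|energy:|energy-rate|percentage'
            sleep 30
        done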

    Read the article

  • CodePlex Daily Summary for Saturday, October 05, 2013

    CodePlex Daily Summary for Saturday, October 05, 2013Popular ReleasesEvent-Based Components AppBuilder: AB3.Iteration.53: Iteration 53 (Feature): Allow drag&drop of existing component (flow, step) from component list to chart. Duplicate names are automatically recognized and solved. By the color of the draged component you can see what kind of component (flow or step) is currently draged. New: AddExistingComponentFlow, PartDragDropEventHandler, ExistingStepPreparerLearning JQuery 1.3 And Above Examples: jQuery Demo whats got Added in 1.8: Getting Startedhttp://jqdemos.darshanmarathe.com Getting JQueryJQuery CDN Google Click Microsoft Click Media Temple Click Creating your first page Show/Hide JavaScript fundamentals JavaScript as a Scripting language JavaScript as a functional programming language JavaScript as a dynamic programming language Selectors CSS Selectors Attribute Selectors Custom Selectors Form Selectors Events Page Load/Ready Events Binding Events Compund Events Tricks and other fundas Effects Inline css Modi...Pulse: Pulse 0.6.7.3: Pulse is now accepting donations. To donate by Bitcoin or PayPal see https://pulse.codeplex.com/wikipage?title=Donations Lots of updates in v0.6.7.3: (Feature) New option allows you to disable wallpaper changing when a full screen application is running. This way Pulse doesn't slow down/lag your videos and games :) (Fix) Some users were getting Wallbase errors when logging in. This has been fixed. (Feature) Right click a provider and you can now make a copy of it by selecting the "Dupl...Upida.Net: Upida.Net 2.2: Example "MyClients" fixed and updated Redundant libraries removed in examples References fixed in examplesCompare .NET Objects: Version 1.7.3.0: Fix for problem with enum showing type in the breadcrumb Changed skip of same class from GetHashCode to use ReferenceEquals Applied patch 15082 from FarrisMoreTerra (Terraria World Viewer): MoreTerra 1.11.1: Release 1.11.1 =========== =Bug Fixes= =========== Added more tile blocks (Clouds, crimstone) Added items (binoculars, rope, Pirahna Gun) Added ores (Lead, Tin) Chests now work, I broke them yesterday. =============== =Known Issues= =============== I am having trouble with new background walls. So you will see a red outline for crimson then a pink inside. Same with where I think the queen bee lives.VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest NEW: Added Menu Option to delete an Forum Account NEW: Added Support for "ImageTeam.org links FIXED: Fixed Ripping of http://forum.babeunion.com ForumsBeetle.js: Beetle.js v0.9: Beetle.js Beta v0.9DNN® Form and List: DNN Form and List 06.00.07: DotNetNuke Form and List 06.00.06 Changes to 6.0.7•Fixed an error in datatypes.config that caused calculated fields to be missing in 6.0.6 Changes to 6.0.6•Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. 
Changes to 6.0.2•Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 •Change: Data is now stored in nvarchar(max) instead of ntext C...Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.10.3): Fix a bug when the first caracter of a description line is '[' Add search featureSimpleExcelReportMaker: Serm 0.03: SourceCode and Sample .Net Framework 3.5 AnyCPU compile.Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated) Version 7.1002September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Important UpdateThis release has been updated to fix two issues: Upda...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download ends. word lis...Wsus Package Publisher: Release v1.3.1309.28: Fix a bug, where WPP crash when running on a computer where Windows was installed in another language than Fr, En or De, and launching the Update Creation Wizard. Fix a bug, where WPP crash if some Multi-Thread job are launch with more than 64 items. Add a button to abort "Install This Update" wizard. Allow WPP to remember which columns are shown last time. Make URL clickable on the Update Information Tab. Add a new feature, when Double-Clicking on an update, the default action exec...Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. 
FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...New ProjectsC# GUI Oscillocope: c# oscilloscope gui programmingDefinitive Business Management: Business ManagementEnough Connectivity: Enough Connectivity eases the access to Bluetooth and other devices that are connected via the SerialPort on Netduino, .NET Gadgeteer or .NET Micro Framework.Excel add-in for the Intel Math Kernel Library: Expose Intel MKL functionality in Excel.Fix SP 2013 Multitenant Missing BCS and SSS Links: This Project fixes 3 issues in SP 2013 Enterprise Multitenancygaragemanagement: Auto Garage Management SoftwareHost: YibushanrenIRSystem: it's a development project for user activitiesKSP Vessel Viewer: A tiny tool to display craft-file informationLerniXml: Learning XML (+ related technologies)lkasdjlkaslkdljkasd: aPersonal Accountant: Simple and flexible web service for personal finance accounting.Quan Ly Nha Hang: Qu?n lý nhà hàngTicTacToe Ultimate: Services for Score board for Tic Tac Toe game like, but harder and more interesting.Unconfused Bills .net: Unconfuse your bills with this Google Calendar integrated budgeting toolVisual Studio 2010 Extensions - Mike Parks & Cory Cissell: Source code to the extensions Cory and I made a few years back.

    Read the article

  • Permission denied on /dev/xvdf despite sudo?

    - by Sid
    I'm trying to capture a stream arriving on port 9999 and write it to /dev/xvdf. I'm using Amazon EC2 with the Ubuntu 11.10 server image. By default I log in as 'ubuntu', which has sudo privileges. However, when I run the following netcat command, I get a permission error:

        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xv*
        brw-rw---- 1 root disk 202, 1 2011-11-30 22:22 /dev/xvda1
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:27 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo netcat -p 9999 -l > /dev/xvdf
        bash: /dev/xvdf: Permission denied

    Any idea why I get the permission denied error and how I can work around it?

    Update: Is something mysterious running in the background that resets the permissions? Check the snippet below -- the permission flags seem to reset automatically when I try to use /dev/xvdf!?

        ubuntu@ip-10-252-35-122:~$ sudo chmod 777 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brwxrwxrwx 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo nc -p 9999 -l > /dev/xvdf
        This is nc from the netcat-openbsd package. An alternative nc is available in the netcat-traditional package.
        usage: nc [-46DdhklnrStUuvzC] [-i interval] [-P proxy_username] [-p source_port] [-s source_ip_address] [-T ToS] [-w timeout] [-X proxy_protocol] [-x proxy_address[:port]] [hostname] [port[s]]
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf

    I'm using the stock Ubuntu Amazon EC2 images from http://alestic.com/ (links to the images are at the top of that page).
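
    One detail worth double-checking here (a hedged guess, not a confirmed diagnosis): in "sudo netcat ... > /dev/xvdf", the output redirection is performed by the invoking shell, which still runs as the unprivileged ubuntu user -- only netcat itself runs as root -- so it is the shell's write to the device that gets denied. The "reset" of the permission bits could simply be udev re-applying its default rules to the device node. A sketch of two ways to keep the device write privileged:

        # netcat-openbsd takes the listen port as a positional argument (no -p with -l),
        # which is also why the second attempt above only printed the usage text.
        sudo sh -c 'nc -l 9999 > /dev/xvdf'

        # Or keep nc unprivileged and let a root-owned tee perform the device write:
        nc -l 9999 | sudo tee /dev/xvdf > /dev/null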

    Read the article

  • How to convert RMVB to MP4? (or, why doesn't ffmpeg work?)

    - by Tom Brito
    Hi, I'm trying to use this script to convert an RMVB video to MP4, but I'm having problems with ffmpeg. In apt-get there's only ffmpeg0 and ffmpeg-dev; I installed both, but the script doesn't work -- it says that ffmpeg was not found. Any hint on this?

    Update -- the script I'm talking about:

        #!/bin/bash
        # $1: type (A = single file / D = directory), $2: file or path,
        # $3: output resolution, $4: input extension (rmvb, avi, mpeg)
        tipo=$1
        arqv=$2
        resolucao=$3
        tipoarq=$4

        help() {
            clear
            echo "Convertor de Vídeos para MP4"
            echo "Parametro 1 = Tipo: (A - Arquivo/D - Diretório)"
            echo "Parametro 2 = Arquivo/Caminho"
            echo "Parametro 3 = Resolução"
            echo "Parametro 4 = Tipo de Arquivos de Entrada (rmvb, avi, mpeg)"
        }

        if [ "$tipo" = "" -o "$arqv" = "" -o "$resolucao" = "" -o "$tipoarq" = "" ]; then
            help;
            exit
        fi

        # count how many files will be converted
        if [ "$tipo" = "D" ]; then
            count=`ls "$arqv"/*.$tipoarq | wc -l`
        else
            count=1
        fi
        echo "$count arquivos encontrados para converter."

        x=0
        while [ ! $x -ge $count ]; do
            x=`echo $x + 1 | bc`
            if [ "$tipo" = "D" ]; then
                nome=`ls "$arqv"/*.$tipoarq | head -n $x | tail -n 1`
            else
                nome=$arqv
            fi
            echo "Convertendo $nome ..."
            ffmpeg -i "$nome" -acodec libfaac -ab 128kb -vcodec mpeg4 -b 1200kb -mbd 2 -cmp 2 -subcmp 2 -s $resolucao "`echo $nome | sed "s/\.$tipoarq//g"`".mp4
        done
        exit

    Update -- using WinFF it gives:

        Unknown encoder 'libx264'

    I've installed both of the existing packages, libx264-67 and libx264-dev, but neither solved it. Looking for more alternatives...
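
    Before debugging the script, it may be worth confirming what ffmpeg binary (if any) is installed and which encoders it was built with; Ubuntu builds of that era often shipped without libfaac/libx264 for licensing reasons, so the "Unknown encoder" error can simply mean the codec was not compiled in. A hedged set of checks (the package names at the end are release-dependent guesses):

        # Is an ffmpeg binary on the PATH at all, and which build is it?
        which ffmpeg && ffmpeg -version

        # Does this build know the encoders the script asks for?
        # (very old builds list codecs under "ffmpeg -formats" instead)
        ffmpeg -codecs 2>/dev/null | grep -Ei 'libfaac|libx264|mpeg4'

        # If ffmpeg is missing entirely, install it; extra codec packages
        # (names vary by release, e.g. libavcodec-extra-*) may also be needed.
        sudo apt-get install ffmpeg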

    Read the article

  • We Found 100 Manufacturing Heroes That Focus on Innovation!

    - by Stephen Slade
    There’s a good piece written by Ann Grackin of ChainLink Research on the Manufacturing Leadership 100 Awards program held recently in Palm Beach Fl, Apr 30-May 3, 2012.  This article (link below) highlights the summary of the Summit with specific focus on manufacturing innovation.  There were several informative keynotes and sessions from industrial leaders who are leveraging the latest tools and technologies to make better decisions. Ann writes that she was a panelist with Cindy Reese, SVP, Worldwide Operations, Oracle; and Steven Tungate, VP/GM, Supply Chain & Innovation, Toshiba America Business Solutions about Factories and Supply Networks of the Future. Ann writes “So what are these manufacturers doing? Significant rationalization of the supply base (Toshiba, especially, has this issue since they have a long history of many acquisitions), streamlining production to increase productivity, and looking for lower-cost countries for manufacturing….  No doubt firms have global customer bases, so they need to be present in these markets. However, a low-cost-country manufacturing source does introduce more risk in the supply chain. And that was discussed. Quality, security, and intellectual property protection were the critical global manufacturing issues also discussed. “Cindy (Reese) told a fascinating story about Oracle’s acquisition of Sun and the supply chain that was subsequently created. Here was one of the key points: Although Oracle sells on a global basis, they now do their own factory-installed software. This keeps potential ‘factory-installed malware’ from getting into the servers at contract manufacturers, and prevents pirated software. In this way, Oracle ensures that they deliver the quality and security people expect”. Learn more about the Manufacturing Leadership 100 program from Manufacturing Executive at: http://www.mlsummit.com/ Full Article Link:  http://www.clresearch.com/research/detail.cfm?guid=52327213-3048-79ED-99D4-E433DA64D4F0

    Read the article

  • Cannot install openjdk on Hardy Heron

    - by infaustus
    I know that Hardy Heron is very old, but please don't ask why it has to be Hardy... Here is what I've tried:

        root@vz10931:/etc/apt# apt-get install openjdk-6-jre
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          openjdk-6-jre: Depends: libasound2 (> 1.0.14) but it is not going to be installed
                         Depends: libgif4 (>= 4.1.6) but it is not going to be installed
                         Depends: libxtst6 but it is not going to be installed
                         Depends: openjdk-6-jre-headless (>= 6b18-1.8.3-0ubuntu1~8.04.2) but it is not going to be installed
          vim: Depends: vim-common (= 1:7.1-138+1ubuntu3.1) but 2:7.3.154+hg~74503f6ee649-2ubuntu3 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    My sources.list:

        deb http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
        deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
        deb http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
        deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

    And:

        root@vz10931:/etc/apt# ls -l sources.list.d/
        total 0

    Please help. The last time I ran apt-get install -f I had to reinstall the whole system because everything crashed.

    Edit: I checked which Java packages are installed:

        root@vz10931:/var/www/mailer# dpkg --list | grep java
        iU  sun-java6-bin  6.24-1build0.8.04.1  Sun Java(TM) Runtime Environment (JRE) 6 (ar
        iU  sun-java6-jdk  6.24-1build0.8.04.1  Sun Java(TM) Development Kit (JDK) 6
        iU  sun-java6-jre  6.24-1build0.8.04.1  Sun Java(TM) Runtime Environment (JRE) 6 (ar

    but when I try to start a Java program (java -jar program.jar) I get:

        -bash: java: command not found
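
    A hedged sequence that is sometimes enough for this kind of half-installed state ("iU" in the dpkg listing means unpacked but not configured) -- worth trying on a snapshot or scratch copy first, given that apt-get -f install has already broken this machine once:

        # Finish configuring the half-installed sun-java6 packages ("iU" = unpacked, not configured)
        sudo dpkg --configure -a

        # Preview what apt would do to fix the broken dependencies before letting it act
        sudo apt-get -f install -s
        sudo apt-get -f install

        # If a java binary is present afterwards, make sure it is registered and on the PATH
        sudo update-alternatives --config java
        java -version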

    Read the article

  • What FOSS solutions are available to manage software requirements?

    - by boos
    In the company where I work, we are starting to plan for compliance with a formal software development life cycle. We already have a wiki, a VCS, a bug tracking system, and a continuous integration system. The next step is to start managing software requirements in a structured way. We don't want to use a wiki or shared documents, because we have many sources of input (developers, managers, commercial staff, security analysts and others) and we don't want to handle a proliferation of .doc files around the network share. We are trying to find a FOSS tool to manage all of this. We are about 30 people and don't have a budget for commercial software, so we need a free solution for requirements management. What we want is software that can manage:

    Required features:

    - Software requirements organized in a structured, configurable way
    - Versioning of requirements (history, diff, etc., like source code)
    - Interdependencies between requirements (child of, parent of, related to)
    - Rule-based access control for data handling
    - Multi-user, multi-project
    - File upload (for graphs, related documents and so on)
    - Reporting and extraction features

    Optional features:

    - Web based
    - Test cases
    - Time-based management (timeline, expected data, result data)
    - Person allocation and so on
    - Business-related stuff
    - Hardware allocation handling

    I have already played with testlink and am now playing with RTH; the next one I will try is Redmine.

    Read the article

  • How to fix sound in Wolfenstein: Enemy Territory

    - by GrizzLy
    I installed Wolfenstein: Enemy Territory and I can't get sound to work. Everything I installed is in the default paths. I was on 10.04 and then upgraded to 10.10 via the software update GUI; sound worked in 10.04 using method 2 below. Here is what I have tried:

    1. Running the game via esd:

        killall esd; et; esd

    with that I get:

        ------- sound initialization -------
        /dev/adsp: No such file or directory
        Could not open /dev/adsp
        ------------------------------------

    2. Enabling the ALSA OSS emulation settings:

        sudo -i
        echo "et.x86 0 0 direct" > /proc/asound/card0/pcm0p/oss
        echo "et.x86 0 0 disable" > /proc/asound/card0/pcm0c/oss
        exit

    with that I get:

        bash: /proc/asound/card0/pcm0p/oss: No such file or directory

    and indeed I do not have that file; I only have sub0 and sub1 in pcm0p.

    3. Running et with the et-sdl-sound script, which gives this console output: http://pastebin.com/J7gRU1uh  I have probably messed up the SDL libraries -- I could not get sound to work, so I downloaded new ones from the Debian package site and installed them. I also tried setting SDL_AUDIODRIVER="pulse" in et-sdl-sound, and it looks like I get the same error as in method 3.

    4. Finally:

        pasuspender -- et +set s_alsa_pcm plughw:0

    gives me:

        ------- sound initialization -------
        /dev/adsp: No such file or directory
        Could not open /dev/adsp
        ------------------------------------

    Misc: @Oli: I do not know if I am running pulse or esd -- how can I check that?
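
    To answer the last question -- checking whether PulseAudio or esd is actually running -- here are a few quick, read-only checks, assuming the PulseAudio command-line tools and alsa-utils are installed:

        # Is a PulseAudio daemon running for this session?
        pactl info 2>/dev/null | head -n 5
        ps -C pulseaudio -o pid,user,cmd

        # Or is the old Enlightened Sound Daemon (esd) running instead?
        ps -C esd -o pid,user,cmd

        # Which ALSA devices exist at all? ET's OSS sound code expects /dev/dsp-style devices.
        aplay -l
        ls -l /dev/dsp* /dev/adsp* 2>/dev/null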

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers work with a large amount of data of poor quality. The reasons for this are only partially related to our software quality; they have more to do with the history of the data: most of it was migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and there were accidental misentries on the customer's side (which our software allowed by mistake). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that the data troubles may wake up some managers at the customer, and that some processes on the customer's side may stop working because those processes have adapted to our system. Personally, I consider data migrations an integral part of software development: data migration is to data what refactoring is to code. I think data migration is essential for creating software that evolves. Without it, we would have to create painful software that works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties they cause? Any other interesting thoughts on this topic?

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured to audit both successful and failed logins, writing them to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog -- and therefore huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Server Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this. Here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries are:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT COUNT(*) AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0, 1, 'dbcc'

    To show backup entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft page describing the parameters it takes; however, there's a good blog on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do, remember to change the number of errorlogs you retain -- the default of 6 might not be sufficient for you.

    Read the article

  • Oracle Configuration Manager for HRMS / EBS Customers

    - by Robert Story
    Upcoming Webcast
    Title: Oracle Configuration Manager for HRMS / EBS Customers
    Date: April 9, 2010
    Time: 11:00 am EDT, 8:00 am PDT, 8:30 pm IST
    Product Family: EBS HRMS

    Summary: The webcast will focus on the highlights and benefits of using Oracle Configuration Manager for HRMS / EBS customers. The one-hour session is recommended for functional / technical EBS HRMS users and system administrators. Along with key highlights of Oracle Configuration Manager, its usage in debugging EBS and HRMS issues will be discussed. Topics will include:

    - OCM Overview
    - Data Collection and its usage
    - Key Benefits for HRMS / EBS customers
    - Change History
    - EBS HRMS Stat Pack
    - Deployed Customizations
    - Project management and Milestones
    - Resources & References

    A short, live demonstration (only if applicable) and question and answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Change power button to 'Ask' in Xubuntu 13.10

    - by Gully.Moy
    I have recently installed Xubuntu 13.10 on my Vaio VPCEA, which makes me a Linux beginner. The problem is that the laptop's power button is right on the edge of the bezel, making it far too easy to press accidentally -- in my opinion a design fault by Sony. At present, when I press the power button the machine shuts down straight away, and as you can imagine, when I'm accidentally pressing it all the time this gets very annoying. So I planned to change it to ask what I would like to do when I press it, or at least ask if I'm sure.

    I went through the XFCE GUI options (Settings Manager -> Power Manager) to the field "When power button is pressed", but it was already set to "Ask". So I did some digging and found a thread telling me to open /etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-power-manager.xml, find power-button-action and check that value="3". It already did. Then I found this thread, which focuses on acpi scripts. I tried solutions 1 & 2, using sudoedit to change the files accordingly (I have made executable bash shell scripts before, so I think I followed them correctly), but still no difference. I also found this thread, which instructed me to edit /etc/systemd/logind.conf so that HandlePowerKey=ignore. Still no luck. I even tried my own approach of completely disabling /etc/acpi/powerbtn.sh by renaming it powerbtn.sh.bak, hoping for at least no response from the power button... and I have done many reboots in between... but it still shuts down! I have also read that some people have the file /etc/acpi/events/power_button, but I do not.

    So does anyone have any other ideas? What else could be executing the shutdown sequence? Is there something I'm missing? I haven't undone any of these actions, so every one of the above files is currently edited on my computer, with the exception that "Solution 2" automatically undid "Solution 1" above. Thanks guys.
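
    A couple of hedged things to check from a terminal: the "Ask" behaviour only happens if xfce4-power-manager is running and receives the button event before acpid/logind act on it, and the same setting the dialog edits can be inspected and set with xfconf-query (the numeric action values can differ between versions, so trust what your own GUI writes):

        # Is the power-manager daemon running at all? If not, nothing ever gets a chance to "Ask".
        ps -C xfce4-power-manager -o pid,cmd

        # Run it in the foreground with debugging and press the button to see whether the event arrives
        xfce4-power-manager --no-daemon --debug

        # Query/set the same property the Settings dialog edits (3 = "Ask" on this install, per the xml above)
        xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/power-button-action
        xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/power-button-action -s 3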

    Read the article

  • Failed to spawn test

    - by Lost
    Running a simple test in Ubuntu 12.04:

        sudo lxc-execute -n test /bin/bash -l debug -o outout

    I got this error message:

        lxc-execute: failed to spawn 'test'

    cat outout:

        lxc-execute 1347053658.113 DEBUG lxc_start - sigchild handler set
        lxc-execute 1347053658.113 INFO lxc_start - 'test' is initialized
        lxc-execute 1347053658.366 DEBUG lxc_start - Dropping cap_sys_boot and watching utmp
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/' (rootfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/sys' (sysfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/proc' (proc)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev' (devtmpfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev/pts' (devpts)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/' (ext3)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/fs/fuse/connections' (fusectl)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/debug' (debugfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/security' (securityfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/lock' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/shm' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/rpc_pipefs' (rpc_pipefs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/scratch/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/share' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/proj/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/users/bhu' (nfs)
        lxc-execute 1347053658.367 ERROR lxc_start - failed to spawn 'test'

    Running sudo lxc-checkconfig:

        Kernel config /proc/config.gz not found, looking in other places...
        Found kernel config file /boot/config-2.6.38.7-1.0emulab
        --- Namespaces ---
        Namespaces: enabled
        Utsname namespace: enabled
        Ipc namespace: enabled
        Pid namespace: enabled
        User namespace: enabled
        Network namespace: enabled
        Multiple /dev/pts instances: enabled
        --- Control groups ---
        Cgroup: enabled
        Cgroup namespace: enabled
        Cgroup device: enabled
        Cgroup sched: enabled
        Cgroup cpu account: enabled
        Cgroup memory controller: enabled
        Cgroup cpuset: enabled
        --- Misc ---
        Veth pair device: enabled
        Macvlan: enabled
        Vlan: enabled
        File capabilities: enabled

        Note : Before booting a new kernel, you can check its configuration
        usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

    What's the problem? Thanks a lot
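
    One hedged reading of the log above: the lxc_cgroup lines walk the mount table looking for a mounted cgroup hierarchy and never find one before the spawn fails, and lxc-execute cannot start a container without one. On 12.04 the cgroup-lite package normally mounts /sys/fs/cgroup at boot, but that may not apply to this custom Emulab kernel. Some checks worth trying (the mount point and log path below are just examples):

        # Is any cgroup hierarchy mounted? lxc-execute needs one to place the container in.
        grep cgroup /proc/mounts

        # If nothing is mounted, mounting one (or installing cgroup-lite, which does it at boot) may help:
        sudo mkdir -p /sys/fs/cgroup
        sudo mount -t cgroup cgroup /sys/fs/cgroup
        # or: sudo apt-get install cgroup-lite

        # Re-run with full debugging and the command separated by "--" to see which step fails:
        sudo lxc-execute -n test -l DEBUG -o /tmp/lxc-test.log -- /bin/bash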

    Read the article

  • 3ds Max error dialog: "Instancing not supported for this action"

    - by monsto
    "Instancing not supported for this action” is the dialog I get. My favorite part is that, according to google and yahoo, apparently i am the only person in the history of mankind to experience these words together in this order, let along get this message from Max. Thanks, autodesk, for putting this dialog in special for me! So I’ve created my model (nws) and was setting up a Skin Wrap. Selected "Face Deformation", added the base-skin for weight, checked “weight all points”. . . clicked “convert to skin” and got that dialog. My model doesn’t have a whole lot of elements to it, I had a left and right appendage that came from a base model (skyrim). so, i did a clonecopy of all 3 of my elements, just to be sure nothing was instanced… and VOILA! Same error message. the only other elements are an imported NIF mesh and skeleton. Any idea where this is coming from or how I can make it go away so that I can export my mesh?

    Read the article

  • Selecting a JAX-RS implementation for a new project

    - by Fernando Correia
    I'm starting a new Java project which will require a RESTful API. It will be a SaaS business application serving mobile clients. I have developed one project with Java EE 6, but I'm not very familiar with the ecosystem, since most of my experience is on the Microsoft platform. Which would be a sensible choice of JAX-RS implementation for a new project such as the one described? Judging by Wikipedia's list, the main contenders seem to be Jersey, Apache CXF, RESTeasy and Restlet, but the "Comparison of JAX-RS Implementations" cited on Wikipedia is from 2008. My first impressions from their respective home pages are:

    - CXF aims to be a very comprehensive solution (it reminds me of WCF in the Microsoft space), which makes me think it may be more complex to understand, set up and debug than what I need;
    - Jersey is the reference implementation and might be a good choice, but it's a legacy from Sun and I'm not sure how Oracle is treating it (the announcements page doesn't work and the last commit notice is from 4 months ago);
    - RESTeasy is from JBoss and probably a solid option, though I'm not sure about the learning curve;
    - Restlet seems to be popular but has a lot of history, and I'm not sure how up to date it is in the Java EE 6 world or whether it carries a heavy J2EE mindset (like lots of XML configuration).

    What are the merits of each of these alternatives? What about the learning curve? Feature support? Tooling (e.g. NetBeans or Eclipse wizards)? What about ease of debugging and deployment? Is any of these projects more up to date than the others? How stable are they?

    Read the article

  • Enabling a user (created with the adduser command) for lightdm graphical login

    - by Basile Starynkevitch
    I just installed Ubuntu 12.04 AMD64 on a new (empty) hard disk (because the previous one crashed). Since I am quite familiar with Debian, I created two accounts with the adduser command. Since I also have an NFSv3 file system, I explicitly gave user ids when creating them (for simplicity, I keep the same user id on the home server, running Debian; the user names contain digits; I'm not using LDAP), e.g.:

        # grep bethy /etc/passwd
        bethy46:x:501:501:Bethy XXX,,,06123456:/home/bethy:/bin/bash
        # grep bethy /etc/group
        bethy64:x:501:
        # grep bethy /etc/shadow
        bethy46:$6$vQ-wmuchmorethings-2o/:15479:0:99999:7::

    Of course /home/bethy exists. (The actual user name is slightly different, and I am not showing the real entries, for obvious privacy reasons.) However, these users don't appear at the graphical login prompt (lightdm), even though they exist in the system: they have entries in /etc/passwd & /etc/shadow, and I (partly) restored their /home. I've got no specific user config under /etc/lightdm; the file /etc/lightdm/users.conf mentions

        # NOTE: If you have AccountsService installed on your system, then LightDM
        # will use this instead and these settings will be ignored

    but I have no idea how to deal with AccountsService through the command line. As you probably guessed, I really dislike doing administrative tasks through a graphical interface; I much prefer the command line. What did I do wrong? How can a user entry not appear at the lightdm graphical login? (I need my wife's user entry to appear at the graphical login.) I am not asking how to hide a user, but how to show it at the lightdm graphical prompt.

    Work-around: as I have been told in comments by Nirmik and by Enzotib, lightdm probably doesn't show any users with a uid less than 1024. So I changed all the uids to be greater than 8200 (including on the Debian NFS server) and this made all the users visible at the graphical prompt. It is a pain that such a threshold is not really documented.
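
    For anyone landing here with the same problem, the work-around above boils down to getting the account's uid above the threshold the greeter applies, or lowering that threshold. A hedged sketch of both approaches -- the name, uid and paths below are illustrative, and the NFS server must be kept in sync:

        # Example only: move one account (and its primary group) above the threshold,
        # then fix ownership of its files -- the uid 8246 and paths are illustrative.
        sudo usermod  -u 8246 bethy46
        sudo groupmod -g 8246 bethy46
        sudo find /home/bethy -xdev \( -uid 501 -o -gid 501 \) -exec chown -h 8246:8246 {} +

        # Alternative (if honored on your version): lower lightdm's own threshold instead of
        # renumbering accounts -- users.conf is only consulted when AccountsService is absent.
        #   /etc/lightdm/users.conf
        #   [UserList]
        #   minimum-uid=500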

    Read the article

  • The Oracle VM Hall of Fame

    - by Kristin Rose
    “Take me out to the ball game, take me out to the crowd. Buy me a new Oracle VM, I want my competition to be history!”...Yes, baseball is in full swing, and as we make our way to the closing of the quarter, Oracle is ready to “knock it out of the park” with its newly updated release of Oracle VM 3.1. This home run of a server virtualization solution will let you deploy software faster, as it intelligently manages your entire infrastructure, from application to disk. As if that wasn’t enough, the competition can’t even get on base! Have a look at the final score below: Partners will be hitting grand slams left and right because management tools, application templates and single source support, have all teamed up to create one heck of a curve ball for the competition, but more importantly, an absolute first draft pick for our partners. With no license cost and an affordable enterprise support cost, crowds have gathered to see this ‘All Star’ play some hard ball. Watch as Jeff Doolan, Sr. Director of Linux and Virtualization Channel Sales at Oracle, goes into more depth on how Oracle VM is a real game changer and eliminates the competition.Adding to the line-up are two key components of Oracle VM 3.1: Enhanced Ease-of-use: The new GUI design is engineered for faster execution of workflow and to maximize ease of use and reduce deployment time. Administrators have more time to spend at the ball park or focus on the business.New Oracle VM Templates: such as the Oracle E-Business Suite 12.1.3; Oracle PeopleSoft FSCM 9.1; Oracle Enterprise Manager 12c; Oracle Linux 5.8; Oracle Linux 6.1; Oracle Solaris 11 – which add to the existing 100+ existing templates that are ready for download. Oracle VM Templates are pre-configured as an entire stack including OS and application fully tested, production ready and certified from Oracle.For more information on Oracle newest player, Oracle VM 3.1, read this press release or visit our technology information page. Batter Up,The OPN Communications Team

    Read the article

  • Did Microsoft designers get their butts kicked 3 years ago?

    - by John Conwell
    This is something I've been wondering about for about a year now. Microsoft has a history of creating very useful products with lots of useful features. But useful does not mean usable. A lot of the stuff coming out of Redmond over the past 10 years doesn't really seem to have been well thought out from a user-design point of view: lots of extra steps, lots of popup windows... very little innovative thinking about the user experience of these products.

    But about a year ago I started seeing changes in the new products coming out of Microsoft. Windows 7 is a good example of a big change. They really got their asses handed to them on Vista, so they had to make a change. And it looks like this change in philosophy has bled over into other areas: the new Office (2010) lineup has a lot of changes in it to make it far more usable. Given that big changes like this take about three years to go from start to actually shipping product, I'm curious what happened internally at Microsoft that really drove this change in product design. I think Microsoft got so focused on just adding new functionality for so long that they forgot about the little things that can really make or break a product. Office 2010 is full of these little things that make it much nicer to use. I just hope it's not too late for them.

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley.)   From something as simple as relating Tasks to Check-in’s or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release. All that information is available and more in TFS. Although all of this tradability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changes all the way up to requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build In order to build the relationship model below we need to examine how each of the relationships get there. Each member of your team from programmer to tester and Business Analyst to Business have their roll to play to knit this together. Figure: The relationships required to make this work can get a little confusing If Build is added to this to relate Work Items to Builds and with knowledge of which builds are in which environments you can easily identify what is contained within a Release. Figure: How are things progressing Along with the ability to produce the progress and trend reports the tractability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, lets look at each of the important Artifacts and how they are associated with each other… Requirements – The root of all knowledge Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRD’s) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so lets look at the model: Figure: If the centre of the world was a requirement We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when There is also the ability to query Work Items based on the History of changed that were made to it. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add a “Asof” clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in notepad, but one imported into TFS you run it any time you want. 
Figure: Saving Queries locally can be useful All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS. Tasks – Where the real work gets done Tasks are the work horse of the development team, but they only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team but however it happens all of the Tasks create should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement Tasks should be used to track the day-to-day activities of the team working to complete the software and as such they should be kept simple and short lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose Although the Task Work Item Type describes the work that will be done the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset which is committed to TFS in a single operation. During this operation developers can associate Work Item with the Changeset. Figure: Tasks are associated with Changesets   Changesets – Who wrote this crap Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds   Figure: Changesets tell us what happened to the files in Version Control Although comments can be changed after the fact, the inventory and Work Item associations are permanent which allows us to Audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task which automatically associates it Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level What would be even more powerful would be if we could view these changes super imposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system. Figure: Annotate shows the lines the Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Label’s is that they can be changed after they have been created with no tractability. This makes them practically useless for any sort of compliance audit. So what do you use? Branches – And why we need them Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed of for release. However it is still possible that someone changed the Label between this time and its creation. Another better method can be to explicitly link the Build output to the Build. 
Builds – Lets tie some more of this together Builds are the glue that helps us enable the next level of tractability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled When the Build is called and starts it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build It will then use that information to identify the Work Items that are associated with all of the Changesets Changesets are associated with Build and change the “Integrated In” field of those Work Items . Figure: Find all of the Work Items to associate with The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build are updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build Now that we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing and back down to the Build that contains the finished Requirement. But how do we know wither that Requirement has been fully tested or even meets the original Requirements? Test Cases – How we know we are done The only way we can know wither a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them it makes it both difficult for them to reject a Requirement when it passes all of the tests, but also provides a level of tractability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement It is crucial for this to work that someone from the Business has to sign-off on the Test Case moving from the  “Design” to “Ready” states. The Second is the ability to associate an MS Test test with the Test Case thereby tracking the automated test. This is useful in the circumstance when you want to Track a test and the test results of a Unit Test designed to test the existence of and then re-existence of a a Bug. Figure: Associating a Test Case with an automated Test Although it is possible it may not make sense to track the execution of every Unit Test in your system, there are many Integration and Regression tests that may be automated that it would make sense to track in this way. Bug – Lets not have regressions In order to know wither a Bug in the application has been fixed and to make sure that it does not reoccur it needs to be tracked. Figure: Bugs are the centre of their own world If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. 
You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations This allows you to track Bugs / Defects in your system effectively and report on them. Change Request – I am not a feature In the CMMI Process template Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately as an Auditor may want to know what was changed and who authorised it. Again and similar to Bugs, if the Change Request is big enough that it would require to be broken down into Tasks it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only Affect Requirements and not rewrite them Conclusion Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation but most of that information is captured for you as long a you do it right. You don’t even need every team member to understand it all as each of the Artifacts are relevant to a different type of team member. Business Analysts manage Requirements and Change Requests Programmers manage Tasks and check-in against Change Requests and Bugs Testers manage Bugs and Test Cases Build Masters manage Builds Although there is some crossover there are still rolls or “hats” that are worn. Do you thing this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How can I find out which user deleted a directory?

    - by Rob S.
    My Ubuntu server has roughly 30 active users on it, and I personally know everyone using the server. Recently, a few friends and I were working on a project. We made a new directory for the project, and since everyone knows everyone, we didn't bother protecting our work with a bunch of permissions. We should have, though, because we woke up this morning to find that someone had removed our entire directory. Our work is backed up every night, so it's really not a big deal to restore it. However, we would like to find out who removed it so we can confront them. So far the best idea we've come up with is checking everyone's bash history, but that is long and tedious, and chances are that if there was malicious intent behind the directory removal, the culprit probably modified theirs to cover their tracks (or of course they might use a different shell). So, basically, what is the easiest and quickest way to find out who deleted a directory? Thanks in advance for your time.
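
    A couple of hedged starting points: sweeping every user's shell history is easy to script (though trivially defeated, as noted above), and an auditd watch gives reliable attribution for anything that happens from now on. The project path and audit key below are just examples:

        # Sweep the shell histories of all home directories (and root) for removal commands.
        # Assumes bash and that nobody scrubbed their history -- treat hits as leads, not proof.
        for h in /home/*/.bash_history /root/.bash_history; do
            sudo grep -HnE 'rm -r|rm -f|rmdir|unlink' "$h" 2>/dev/null
        done

        # Going forward, an audit watch records who touches the tree without relying on histories
        # (requires the auditd package; it only catches events after the rule is added):
        sudo auditctl -w /srv/project -p wa -k project-watch
        sudo ausearch -k project-watch -i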

    Read the article

  • Announcing IIS Community Newsletter

    - by steve schofield
    I'm excited to announce that a newsletter for the IIS community is available. Here is the link to sign up. The goal is to cover all happenings in the IIS community, with a monthly issue. Sign up for the IIS Community Newsletter: http://www.iislogs.com/newsletter/

    A little history: I was previously involved with Brett Hill in authoring the IIS Answers newsletter for about a year. That newsletter literally reached thousands of IIS administrators, providing up-to-date Microsoft and community-related information. It was an excellent source of information, and that is my goal with the IIS Community Newsletter.

    With everything, there is always some "geeking" that goes into it. I'm using the www.StarDeveloper.com newsletter application. For a modest $10.00 fee, the application provides a simple yet powerful way to manage newsletters. I added a CAPTCHA feature to the sign-up; the CAPTCHA module I used was provided free by MONDOR software. I had personally never used one in an application before, and it was easy to implement. Thanks for sharing!

    Cheers, Steve Schofield
    Microsoft MVP - IIS

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 3 (of 3) And now for the final installment of my three-part series on the Internet of Things & Commerce. Post one, “The Next 7,000 Days”, introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza. This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce solution is re-imagining the way physical and digital shopping come together.

    -------

    QUESTION: It’s your job to stay on top of what our customers need to not only run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping?

    Kristen: I’ll agree with Brian Celenza that, hands down, mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence, mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap into their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering and how they can adapt their content and delivery of that content to meet those needs.

    And mobile shopping doesn’t stop at the consumer/buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged for mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers.

    QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality?

    Kristen: Well, without question, connecting digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn’t a pure-play online retailer), they have to connect the dots because shoppers – whether consumers or B2B buyers – don't think in clearly defined channels anymore. The expectation is connectedness – for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online/pick up in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc.

    A few years ago, the main link between store and digital was the smartphone. We all remember when “apps” became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native and how that ties into the purpose and role of mobile within the digital channel. Put more broadly, how these pieces fit together in the overall brand puzzle.

    The same could be said for “showrooming.” Where it was once a major concern (i.e., shoppers using stores to look at merchandise and then ordering online from Amazon), in recent months it's emerged that the inverse is becoming a reality as well. "Webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior pure-play retailers are challenged with. There are many technologies, behaviors, and pieces of information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted-selling apps that arm associates with information about shoppers, shared promotions, inventory, etc.

    QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years?

    Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution. Prior to Oracle’s acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today.

    Commerce Service Center (v2007.1): ATG introduced the Commerce Service Center in 2007.1, marking the first entry into what was then called “cross-channel.” The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, the online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine and provided a means of connecting data from the call center and online channels.

    REST services framework (v9.1): In v9.1 we introduced the REST services framework and interface in the Platform that enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omnichannel features and functionality.

    Multisite Architecture (v10): With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios.

    Endeca Page Builder (2.x) / Experience Manager (3.x): With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store – see below) on top of Page Builder (and then eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology’s capabilities for omnichannel commerce to continue to drive innovation for our customers.

    Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11): After the establishment of the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities throughout the iOS native app and the enablement of the core Commerce Service Center features. The result is that customers can leverage these services for their integrations with other systems, as well as their omnichannel initiatives.

    Mobile web reference application (v10.1): In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.

    Native (iOS) reference application (v10.1.1): We came out with the 10.1.1 shopper-facing native iOS reference app that illustrated how to use the Commerce REST services to deliver an iOS app. It also included Experience Manager-driven pages.

    Assisted Selling reference application (v10.2.1): The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products and then transact using the ATG engine.

    Support for Retail promotions (v11): As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products. Those products included Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.

    Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience – and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our create once, deploy commerce anywhere framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, call center, and new websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technology Network.]

    ------

    Thank you to both Brian and Kristen for their contributions to this blog series and their continued thought leadership for Oracle Commerce. We are all looking forward to the coming months and years of new shopping behaviors and opportunities to innovate. Because – if the digital fabric of our everyday lives continues to change at the same pace – the next five years (that's just under 2,000 days) will be dramatic.

    ----------

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • An Interesting Perspective on Oracle's Mobile Strategy

    - by Carlos Chang
    Oracle is well known for being an acquisitive company. On average, I think we acquire about one company a month (don't quote me; I didn't run the numbers). With all the excitement around mobile, mobile and, wait for it… mobile, well, you know... what's up with that? Well, just to be clear, and to quote Schultz from Hogan's Heroes: "I know nothing! Nothing!" But I did recently run across this blog by Kevin Benedict over at mobileenterprisestrategies.com covering this very topic, Oracle Mobility Emerges Prepared for the Future. A little (fair use) snippet here: "History, however, may reward Oracle's patience. While veteran mobile platform vendors (including SAP) have struggled to keep up with the fast changing market, R&D investment requirements, the fickle preferences of mobile developers, and the emergence of cloud-based mobile services, Oracle has kept their focus on supporting mobile developers with integration services and tools that extend their solutions out to mobile apps." It's an interesting read, and I would encourage you to check it out here. BTW, if you're a Twitter user, follow our new account @OracleMobile. To the first ten thousand followers, I bequeath you my sincere virtual thanks and gratitude. :) For the dedicated mobile blog, go to blogs.oracle.com/mobile.

    Read the article

  • DB API for shell scripting (any shell)

    - by foampile
    I am faced with some legacy shell scripts that run batch data processing jobs in Oracle using SQL*Plus. For the most part, the data tier does not have to communicate back to the script with retrieved data to be passed for shell-level processing, but in a few cases it does. The problem is, SQL*Plus is really meant to be an end-user app and not an API that can communicate with other clients programmatically. That is why people have invented APIs such as DBI for Perl, JDBC for Java, ODBC, etc. The way it is done now is that they invoke SQL*Plus and then parse the output, which is clearly designed for human consumption, using tools like sed and awk. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to scale their scripts up to Perl or Python, where there are data access APIs. So I am wondering whether there are similar APIs for shell, e.g. ksh or bash. What I would like is an API that would return data in a two-dimensional array of strings (for lack of typing) so that I can just read DB data like that. The way they do it now is akin to parsing a regular web page's HTML to get a single stock quote rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks
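
    There isn't a first-class DB API for bash or ksh the way there is for Perl or Java; the usual compromise is to make SQL*Plus behave like a predictable row emitter instead of a report writer: switch off its decorations, emit one delimited record per row, and read those records into a shell array. Below is a minimal sketch of that pattern. The connect string, table, columns, and the '|' delimiter are hypothetical placeholders, and it assumes bash 4+ (for mapfile) and that sqlplus is on the PATH.

        #!/bin/bash
        # Sketch: drive SQL*Plus as a predictable row emitter, not a report writer.
        # Connect string, table, and columns are placeholders (assumptions).
        conn="scott/tiger@ORCL"

        query="
        WHENEVER SQLERROR EXIT 1
        SET PAGESIZE 0 FEEDBACK OFF HEADING OFF VERIFY OFF ECHO OFF
        SELECT ename || '|' || sal FROM emp WHERE deptno = 10;
        EXIT
        "

        # Run the query; WHENEVER SQLERROR makes sqlplus exit non-zero on SQL errors.
        out=$(printf '%s\n' "$query" | sqlplus -S "$conn") || { echo "query failed" >&2; exit 1; }

        # One array element per row; each element is a '|'-delimited record.
        mapfile -t rows <<< "$out"

        # Split each record on the delimiter -- the "two-dimensional array of strings".
        for row in "${rows[@]}"; do
            IFS='|' read -r name salary <<< "$row"
            printf 'name=%s salary=%s\n' "$name" "$salary"
        done

    This is still parsing SQL*Plus output, just with the report formatting switched off, so it is a tamer hack rather than a true API; anything closer to DBI really does require stepping up to Perl or Python.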

    Read the article

  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to that IP address I am greeted with the DRAC login page (it has the Dell logo and everything). However, after trying the credentials root/calvin, I was denied access, so I think the previous owners had set their own password.

    After doing some reading, it appears that I can reset the credentials to the default using:

        racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    But upon entering the command, I get this error:

        bash: /usr/sbin/racadm: No such file or directory

    This holds true even if I run sudo su prior to running the racadm command. If, however, I run:

        sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    there are no errors. Yet when I try to log into the DRAC via the web interface using the credentials root/newpassword, I am still not granted access.

    I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version that is in the Dell repositories, but after that was unsuccessful, I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"', obviously replacing /dev/ttyS1 with the correct information for my system.

    ls -l /usr/sbin/ | grep racadm yields:

        -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm

    I have tried these credentials after each attempt at changing the password: root/calvin, root/newpassword, admin/calvin, admin/newpassword. All have been unsuccessful. What is the next course of action that I should take?
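
    One generic thing worth checking first: on a 64-bit install, "No such file or directory" for a binary that clearly exists often means the kernel cannot find the loader for a 32-bit executable, and sudo may simply be resolving a different copy of racadm through root's PATH. A few diagnostics along those lines are sketched below; none of them are racadm-specific, and the package name at the end is an assumption for 12.04.

        # Which racadm does each context actually resolve?
        type -a racadm
        sudo bash -c 'type -a racadm'

        # Is /usr/sbin/racadm a 32-bit or a 64-bit executable?
        file /usr/sbin/racadm

        # If it reports "ELF 32-bit", check for the 32-bit loader; a missing
        # loader produces exactly the "No such file or directory" error.
        ls -l /lib/ld-linux.so.2

        # Assumed fix for a missing 32-bit runtime on Ubuntu 12.04:
        sudo apt-get install libc6-i386

    Once the same binary runs in both contexts, re-running the config command and reading the group back (for example with racadm getconfig -g cfgUserAdmin -i 1, if your racadm build supports getconfig) would show whether the password change is actually reaching the DRAC or whether the web-interface login is a separate problem.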

    Read the article

< Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >