Search Results

Search found 11288 results on 452 pages for 'git status'.


  • NVidia with Optimus conflicting in Ubuntu 12.04

    - by Humannoise
    I have recently installed Ubuntu 12.04 on an Intel Ivy Bridge machine with integrated graphics and an NVidia GPU with Optimus tech, but I can't get it to work properly. I have already been through the Bumblebee project's solution, but I get the following message when I try to run anything with the nvidia card (e.g. with optirun firefox):

        [ERROR]The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect.
        [ERROR]Could not connect to bumblebee daemon - is it running?

    Since the nvidia card is not working properly, some software like Scilab, which uses the X11 system for graphics handling and plotting, won't work either. My BIOS has no option concerning the graphics card, and the daemon log returned:

        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ bumblebeed[980]: Module 'nvidia' is not found.
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ kernel: [ 17.943272] init: bumblebeed main process (980) terminated with status 1
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ kernel: [ 17.943288] init: bumblebeed main process ended, respawning
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ bumblebeed[1026]: Module 'nvidia' is not found.

    lspci -nn | grep '\[030[02]\]:' returned:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
        01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:0de9] (rev a1)

    For the command dpkg -l | grep '^ii' | grep nvidia I got:

        ii bumblebee-nvidia 3.0-2~preciseppa1 nVidia Optimus support using the proprietary NVIDIA driver
        ii nvidia-current 302.17-0ubuntu1~precise~xup1 NVIDIA binary Xorg driver, kernel module and VDPAU library
        ii nvidia-current-updates 295.49-0ubuntu0.1 NVIDIA binary Xorg driver, kernel module and VDPAU library
        ii nvidia-settings 302.17-0ubuntu1~precise~xup3 Tool of configuring the NVIDIA graphics driver
        ii nvidia-settings-updates 295.33-0ubuntu1 Tool of configuring the NVIDIA graphics driver

    After a full reinstallation, including removal of any previous nvidia driver, lsmod | grep -E 'nvidia|nouveau' returned:

        nvidia 10888310 46

    and dmesg | grep -C3 -E 'nouveau|NVRM' returned things like:

        [ 1875.607283] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [ 1875.607289] nvidia 0000:01:00.0: setting latency timer to 64
        [ 1875.607293] vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=none
        [ 1875.607363] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 302.17 Tue Jun 12 16:03:22 PDT 2012
        [ 1884.830035] nvidia 0000:01:00.0: PCI INT A disabled
        [ 1884.832058] bbswitch: disabling discrete graphics
        [ 1884.832960] bbswitch: Result of Optimus _DSM call: 09000019

    Some programs, like Scilab, are now working fine under optirun (e.g. optirun scilab). Thank you.
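    For reference, the usual cure for the "Module 'nvidia' is not found" error on 12.04-era Bumblebee setups was to reinstall the stack from the Bumblebee PPA so the daemon matches the installed driver. A rough sketch, with the exact package set being an assumption to adapt to your machine:

        # Sketch: reinstall Bumblebee from its stable PPA (assumes Ubuntu 12.04).
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install --reinstall bumblebee bumblebee-nvidia linux-headers-generic
        # Restart the daemon and re-test the discrete card:
        sudo service bumblebeed restart
        optirun glxgears   # any GL program works as a smoke test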


  • Do you care about your Oracle System Support experience?

    - by user12244613
    It has been a while since I blogged about Systems Support within Oracle. I want to take this opportunity to raise awareness of how Oracle is communicating with its systems customers. Previously, every item to be communicated was sent independently via an email message; however, not all messages appear to be getting the attention they require. In an effort to ensure Oracle is reaching all of our Sun and Oracle System customers, we have created the Oracle Systems Support Newsletter. This monthly newsletter will summarize support-relevant information for you to use and will cover topics that impact your support experience. For example:

    1. Did you know that sending explorer content to email addresses with @sun.com is going away soon? For more information, review Document 1362484.1.

    2. Are you an Auto Service Request (ASR) user? If yes, here are the latest changes:
       · ASR Manager accepts My Oracle Support User Name (email address) and password. [Doc ID 1345484.1]
       · The ASR IP Address for secure file transfer has changed. [Doc ID 1338575.1]
       · ASR No Heartbeat Status - find out how to resolve it. [Doc ID 1346328.1]

    3. Did you notice we have changed the Service Request options for Hardware and introduced a new problem category called “Automated Diagnosis”? This service streamlines the data you send in and then automatically provides an update of known issues found in your My Oracle Support Service Request. This feature also fast-tracks hardware failures by sending parts as soon as the data is analyzed. Have you used this new feature? If yes, tell us about it - take the 5-minute survey.

    4. Are you being proactive, or are you still ‘fire fighting’ in reactive mode? If you are being proactive for your Oracle System products, you might have used Oracle Sun System Analysis. Did you find this helpful? Can we improve it? You tell us - take the 5-minute survey.

    5. Are you aware that attaching files to your Service Request enables the support engineer to start work straight away? For a summary of products and files, review the Newsletter.

    6. Are you struggling to find patches, firmware, or product downloads? If yes, these types of issues are all addressed in the Newsletter.

    If this is the type of information you want to know about each month, then take time to read the Newsletter and bookmark it in My Oracle Support so you can stay informed. Thanks for your time.


  • Upgrading to Ubuntu 11.04 failed

    - by Rupert
    Today Ubuntu asked me to upgrade to 11.04. The installation went completely fine until right at the end, when the following packages failed: install-info and ubuntu-standard. The installer hung, so I had to shut it down manually. Ubuntu still works fine, but it says that the upgrade didn't work properly, so I am hesitant to restart until I have resolved the problem, in case I can't get back in. I am running Ubuntu inside the latest version of VirtualBox and was previously running version 10.10. I have tried installing install-info manually with apt-get, but I get the following error:

        Unhandled exception: [#<SystemStackError: stack level too deep>]
        /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:842:in `block in <class:Autotest>': undefined method `backtrace' for [#<SystemStackError: stack level too deep>]:Array (NoMethodError)
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `[]'
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `block in hook'
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `each'
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `any?'
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `hook'
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:344:in `rescue in run'
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:320:in `run'
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:241:in `run'
        from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/bin/autotest:6:in `<top (required)>'
        from /usr/local/ruby/bin/autotest:19:in `load'
        from /usr/local/ruby/bin/autotest:19:in `<main>'
        dpkg: error processing install-info (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of info:
         info depends on install-info; however:
          Package install-info is not configured yet.
        dpkg: error processing info (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of ubuntu-standard:
         ubuntu-standard depends on info; however:
          Package info is not configured yet.
        dpkg: error processing ubuntu-standard (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing:
         install-info
         info
         ubuntu-standard
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any ideas on what I should try next?
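    For what it's worth, the trace suggests the install-info maintainer script is picking up a binary from the local Ruby install under /usr/local/ruby instead of the system tool. A hedged workaround sketch - the stripped-down PATH value is an assumption; the idea is simply to keep /usr/local out of the way while dpkg finishes configuring:

        # Sketch: retry package configuration with the local Ruby toolchain out of PATH.
        sudo env PATH=/usr/sbin:/usr/bin:/sbin:/bin dpkg --configure -a
        # Then let apt resolve anything left over:
        sudo env PATH=/usr/sbin:/usr/bin:/sbin:/bin apt-get -f install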


  • Best practice for marking a bug as resolved in Bugzilla?

    - by Vincent B.
    I am wondering what is the best way to handle marking a bug as resolved while providing the version of the component/product in which the fix can be found.

    Context

    For a project I am working on, we are using Bugzilla for issue tracking, and we have the following:

    - A product "A" with a version number like vA.B.C.D.
    - Product "A" has the following components:
      - Component "C1" with a version number like vA.B.C.D,
      - Component "C2" with a version number like vA.B.C.D,
      - Component "C3" with a version number like vA.B.C.D.

    Internally we keep track of which component versions have been used to generate each product A version vA.B.C.D. Example: Product "A" version v1.0.0.0 has been produced from component "C1" v1.0.0.3, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. And Product "A" version v1.0.1.0 has been produced from component "C1" v1.0.0.4, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. Each component is an SVN repository. The person in charge of generating product "A" only has access to the components' tags folders in SVN, and not to the trunk of each component repository.

    Problem

    Now the problem is the following: when a bug is found in product "A", and the bug is related to component "C1", the version of product "A" is noted (e.g. v1.0.0.0), and this version allows the developer to know which version of component "C1" has the bug (here it will be v1.0.0.3). A bug report is created. Now let's say that the developer responsible for component "C1" corrects the bug; then, when the bug seems to be fixed, and after some test and validation, the developer generates a new tag for component "C1" with the version v1.0.0.4. At this time, the developer of component "C1" needs to update the bug report, but what is best to do:

    - Mark the bug as resolved/fixed and add a comment saying "This bug has been fixed in the tag v1.0.0.4 of the C1 component"?
    - Keep the bug as assigned, and add a comment saying "This bug has been fixed in the tag v1.0.0.4 of the C1 component; update this bug's status to resolved for the next version of the product that will be generated with the newest version (v1.0.0.4 of C1)"?
    - Another possible way to deal with this problem?

    Right now the problem is that when a product component CX is fixed, it is not yet known in which future version of product A it will be included, so I cannot say in which version of the product it will be solved, but it is possible to say in which version of component CX it has been solved. So when do we need to mark a bug as solved: when the product A version includes the fixed version of CX, or only when the CX component has been fixed? Thanks for your personal feedback and ideas about this!


  • Ubuntu Dual Screen Using Virtual Machine - AMD GPU

    - by Chris
    I've been searching online and reading tutorials etc. about how to make my Ubuntu VM dual screen (x86_64). I first tried to run this command:

        sudo aticonfig --initial -f

    which gave me the output:

        sudo: aticonfig: command not found

    I then googled the output and followed instructions telling me to install the ATI drivers in Ubuntu:

        wget http://www2.ati.com/drivers/linux/ati-driver-installer-11-5-x86.x86_64.run
        sudo sh ati-driver-installer-11-5-x86.x86_64.run --buildpkg Ubuntu/natty
        sudo dpkg -i *.deb
        sudo apt-get -f install
        sudo aticonfig -f --initial --adapter=all
        sudo reboot

    It all works well until I input sudo apt-get -f install, which gives me the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 25 not upgraded.
        3 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Setting up fglrx (2:8.850-0ubuntu1) ...
        update-alternatives: error: alternative link /usr/bin/aticonfig is already managed by x86_64-linux-gnu_gl_conf.
        dpkg: error processing fglrx (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of fglrx-amdcccle:
         fglrx-amdcccle depends on fglrx; however:
          Package fglrx is not configured yet.
        dpkg: error processing fglrx-amdcccle (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of fglrx-dev:
         fglrx-dev depends on fglrx; however:
          Package fglrx is not configured yet.
        dpkg: error processing fglrx-dev (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing:
         fglrx
         fglrx-amdcccle
         fglrx-dev
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    At this point I don't know what to do, since running gksudo amdcccle gets me nowhere. For the record, I have 3D acceleration turned on. The following is the GPU for my VM:

        lspci | grep VGA
        00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter

    Any help on how I can make my VM dual screen with Ubuntu would be great. Thank you in advance.
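    As a side note: the guest is showing the VirtualBox Graphics Adapter, so the fglrx packages (a driver for physical AMD cards) have nothing to bind to inside the VM; dual screen for a VirtualBox guest is normally switched on from the host instead. A sketch under that assumption ("MyVM" is a placeholder name):

        # Sketch: undo the fglrx install inside the guest...
        sudo apt-get purge fglrx fglrx-amdcccle fglrx-dev
        sudo apt-get -f install
        # ...then, on the host, give the VM a second virtual monitor:
        VBoxManage modifyvm "MyVM" --monitorcount 2
        # Inside the guest (with Guest Additions installed), arrange the displays:
        xrandr --output VBOX1 --auto --right-of VBOX0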


  • Developer Day @ OOP 2011 with SOA Specialized Partners

    - by Jürgen Kress
    Oracle SOA Specialized Partners like Opitz Consulting participate in our key marketing events. Therefore, make sure that you start your journey to SOA Specialization!

    Oracle Developer Day at OOP: discover the possibilities and power of Java technology! Including live hacking with special guest Java guru Adam Bien! Enterprise applications made easy! Accelerate your development with Java. Come to Oracle's free full-day workshop at OOP and get to know the power of Java. Learn more about the Java strategy and product roadmap, the possibilities that Java SE for Embedded opens up, and how an SOA and BPM solution can be realized on a Java basis. The many improvements in Java EE 6 make developers' lives considerably easier. Do you already know the potential of Java EE 6? Adam Bien's live hacking will knock you off your chairs. Torsten Winterberg, Oracle Fusion Middleware ACE Director, and Danilo Schmiedel will present how Java developers can integrate the Oracle SOA & BPM solutions. In the afternoon, in a hands-on session on your own laptop, you can try out the Java Persistence API, Java Beans, CDI and other technologies. In this free Oracle workshop you can exchange ideas with like-minded people, have the latest technology demonstrated by Oracle experts, and take part in practical programming exercises. This event is right for you if you want to know more about the current status of the Java roadmap, learn more about Java technologies and solutions (Java SE, ME, etc.), try out the Java EE platform, understand the advantages of Java EE 6 for your work, want to scale up to an enterprise landscape, build front-ends with Java Server Faces, or are planning or just starting new development projects. Register now!

    ICM - Internationales Congress Center München, Am Messesee, Trudering-Riem, 81829 München
    27 January 2011, 9:00 - 16:30

    For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required)

    Technorati Tags: OOP, Adam Bien, Torsten Winterberg, Opitz Consulting, Oracle, SOA, SOA Specialization, OPN


  • Code review recommendations and Code Smells

    - by Michael Freidgeim
    Some time ago Twitter told me that I am similar to Boris Lipschitz. Indeed, he is also a .Net programmer from Russia living in Australia. I've read his list of Code Review points and found them quite comprehensive. A few points were not clear to me, and that forced me into further reading.

    In particular, the statement "Exception should not be used to return a status or an error code" wasn't fully clear to me, because sometimes we store an exception as an object with all the error details, and I believe that's a valid approach. However, I agree that throwing exceptions should be avoided if you expect to return an error as part of a normal flow. Related link: http://codeutopia.net/blog/2010/03/11/should-a-failed-function-return-a-value-or-throw-an-exception/

    Another point slightly puzzled me: "If Thread.Sleep() is used, can it be replaced with something else, e.g. Timer, AutoResetEvent, etc." I believe there are very rare cases where anyone uses Thread.Sleep in production code. Usually it is used in mocks and prototypes.

    I had to look further to clarify "Dependency injection is used instead of Service Location pattern". Even though most articles show some preference for dependency injection, there are also advantages to using Service Location. E.g. see http://geekswithblogs.net/KyleBurns/archive/2012/04/27/dependency-injection-vs.-service-locator.aspx. http://www.cookcomputing.com/blog/archives/000587.html refers to the concluding thoughts of Martin Fowler: the choice between Service Locator and Dependency Injection is less important than the principle of separating service configuration from the use of services within an application.

    The post had a link to the excellent Code Smells article by Jeff Atwood, but the statement that "code should not pass a review if it violates any of the code smells" sounds too strict for my environment. In particular, I disagree with the "Dead Code" recommendation: "Ruthlessly delete code that isn't being used. That's why we have source control systems!" If there is a chance that unused code will be required in the future, it is convenient to keep it as commented or #if/#endif blocks with an appropriate explanation of why it could be required. TFS is a good source control system, but context search in the source code of the current solution is much easier than finding something in previous versions of the code.

    I also found a link to the good book "Clean Code: A Handbook of Agile Software Craftsmanship".


  • Bluetooth firmware problem in Ubuntu 13.04

    - by chanzerre
    I have a Dell Inspiron 15R 5520 laptop. Bluetooth is not working at all. rfkill list all gives:

        0: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        2: brcmwl-0: Wireless LAN
                Soft blocked: no
                Hard blocked: no

    dmesg | grep -i bluetooth gives:

        [ 13.644428] Bluetooth: Core ver 2.16
        [ 13.644445] Bluetooth: HCI device and connection manager initialized
        [ 13.644453] Bluetooth: HCI socket layer initialized
        [ 13.644455] Bluetooth: L2CAP socket layer initialized
        [ 13.644461] Bluetooth: SCO socket layer initialized
        [ 15.861363] Bluetooth: hci0 command 0x1003 tx timeout
        [ 15.903443] Bluetooth: can't load firmware, may not work correctly
        [ 17.332535] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [ 17.332538] Bluetooth: BNEP filters: protocol multicast
        [ 17.332544] Bluetooth: BNEP socket layer initialized
        [ 17.393768] Bluetooth: RFCOMM TTY layer initialized
        [ 17.393781] Bluetooth: RFCOMM socket layer initialized
        [ 17.393783] Bluetooth: RFCOMM ver 1.11

    hciconfig gives:

        hci0: Type: BR/EDR Bus: USB
                BD Address: E0:06:E6:D5:DB:46 ACL MTU: 1021:8 SCO MTU: 64:1
                UP RUNNING PSCAN ISCAN
                RX bytes:687 acl:0 sco:0 events:56 errors:0
                TX bytes:2024 acl:0 sco:0 commands:52 errors:0

    I have visited http://wireless.kernel.org/en/users/Drivers/b43 and, according to it, lspci -vnn -d 14e4: gives:

        08:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
                Subsystem: Dell Wireless 1704 802.11n + BT 4.0 [1028:0016]
                Flags: bus master, fast devsel, latency 0, IRQ 17
                Memory at c1500000 (64-bit, non-prefetchable) [size=32K]
                Capabilities: <access denied>
                Kernel driver in use: wl

    So I got my PCI ID as 14e4:4365, which the page says is not supported; the alternative is wl. What should I do? My Wi-Fi is working normally without any problems, but Bluetooth is not. sudo dpkg -i wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb gives the following error:

        (Reading database ... 208543 files and directories currently installed.)
        Unpacking wireless-bcm43142-dkms (from wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb) ...
        Setting up wireless-bcm43142-dkms (6.20.55.19-1) ...
        Loading new wireless-bcm43142-6.20.55.19 DKMS files...
        Building only for 3.8.0-23-generic
        Building initial module for 3.8.0-23-generic
        Traceback (most recent call last):
          File "/usr/share/apport/package-hooks/dkms_packages.py", line 22, in <module>
            import apport
        ImportError: No module named apport
        Error! Bad return status for module build on kernel: 3.8.0-23-generic (x86_64)
        Consult /var/lib/dkms/wireless-bcm43142/6.20.55.19/build/make.log for more information.
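    A couple of hedged checks for the failed DKMS build, since the apport traceback is only the crash-reporting hook failing, not the root cause (the module name and version are taken from the post; the commands are the standard DKMS workflow):

        # Sketch: make sure headers and dkms are in place, rebuild, and read the real error.
        sudo apt-get install --reinstall dkms linux-headers-$(uname -r)
        sudo dkms build -m wireless-bcm43142 -v 6.20.55.19
        sudo dkms install -m wireless-bcm43142 -v 6.20.55.19
        # The actual compile failure, if any, is recorded here:
        less /var/lib/dkms/wireless-bcm43142/6.20.55.19/build/make.log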


  • Tales of a corrupt SQL log

    - by guybarrette
    Warning: I'm a simple dev, not an all-powerful DBA with godly powers.

    This morning, one of my sites was down and DNN reported a problem with the database. A quick series of tests revealed that the culprit was a corrupted log file. Easy fix, I said: I have daily backups, so it's just a matter of restoring a good copy of the database and log files. Well, I found out that's not exactly true. You see, for this database I have daily file backups, and these are not database backups created by SQL Server.

    So I restored a set of files from a couple of days ago, stopped the SQL service, copied the files over the bad ones, and restarted the service, only to find out that SQL doesn't like it when you do that. It suspects something fishy and marks the database as suspect. A database marked as suspect can't be accessed at all. So now what?

    I searched throughout the tubes of the InterWeb and found that you can restore from a corrupted log file by creating a new database with the same name as the defective one, then copying the restored database file (the one with data) over the newly created one. Sweet! But you still end up with SQL marking the database as suspect; at least the newly created log is OK. Well, not quite: it's not corrupted, but the lack of data makes it not OK for SQL, so you need to rebuild the log. How can you do that when SQL blocks any action on the database? First, you need to change the database status from suspect to emergency. Then you need to set the database for single access only. After that, you need to repair the log with DBCC and do the DBA dance. If you dance long enough, SQL should repair the log file. Then you set the access back to multi-user. Here's the T-SQL script:

        use master
        GO
        EXEC sp_resetstatus 'MyDatabase'
        ALTER DATABASE MyDatabase SET EMERGENCY
        Alter database MyDatabase set Single_User
        DBCC checkdb('MyDatabase')
        ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        DBCC CheckDB ('MyDatabase', REPAIR_ALLOW_DATA_LOSS)
        ALTER DATABASE MyDatabase SET MULTI_USER

    So I guess it would have been a lot easier to restore a SQL backup. I can't really say, but the InterWeb seems to say so. Anyway, lessons learned:

    - Vive la différence: file backups are different from SQL backups.
    - Don't touch me: SQL doesn't like it when you restore a file over a corrupted one.
    - The more the merrier: you should do both SQL and file backups.
    - WTF?: The InterWeb provides you with dozens of ways to deal with the problem, but many are SQL 2000 or SQL 2005 only, many are confusing, and many are written in strange dialects only DBAs understand.
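    Following up on the "lessons learned": a native SQL Server backup avoids the suspect-database dance entirely. A minimal sketch - the database name and path are placeholders, and sqlcmd is assumed to be available:

        # Sketch: take and verify a native SQL Server backup.
        sqlcmd -S localhost -Q "BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak' WITH INIT, CHECKSUM"
        sqlcmd -S localhost -Q "RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDatabase.bak'"

    Scheduled daily alongside the existing file backups, this gives you something RESTORE DATABASE can actually consume the next time a log file goes bad.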


  • LCM says Smart List import is complete but actually it's not... Here are probable reasons

    - by RahulS
    First of all, some basics of Smart Lists: administrators use Smart Lists to create custom drop-down lists that users access from data form cells. When clicking in cells whose members are associated with a Smart List (as a member property), users select items from drop-down lists instead of entering data. Users cannot type in cells that contain Smart Lists. Smart Lists display in cells as down arrows that expand when users click into the cells. The link below gives you more information on Smart Lists: http://download.oracle.com/docs/cd/E17236_01/epm.1112/hp_admin/enum_pg.html

    I got a simple query today: "The LCM process generates and indicates a status of 'Complete'; however, the 3,018 records do not appear in the Planning application. No error exists in the log to identify the problem."

    Things which can be checked in this case:

    1. Spaces are not allowed in the Entry Name.
    2. Spaces are allowed in the Entry Label.
    3. The name must start with an alpha character or underscore.
    4. Valid characters for the remaining part of the name must be alpha, numeric, or an underscore.
    5. Enter a name that is unique within the smart list.
    6. I am not sure about the limits, but I have seen 22,000 members loaded fine.
    7. The ID for every entry should be unique.
    8. IDs need not be consecutive; for example, they can go from 1 to 100, then 500 to 900, then 1900 to 4500, etc.

    While importing the .xml file using LCM there were no errors in the foundation and LCM migration logs, but when the HyS9PlanningSysErr.log was checked, a few errors were found. For example:

        The name Data_Coord_(Prod)_ACS is invalid
        The name Sr_Dir_b+Medcd_Gvt_Rel_Sls_Mkt is invalid
        The name entered is invalid. Enter a name that is unique within the smart list

    Also, we can load Smart List dimensions and Smart List dimension entries using the /DS:HSP_SMARTLISTS parameter in the outlineload utility:

        OutlineLoad /A:acpt /U:admin /M /I:c:/smartlist_create1.csv /DS:HSP_SMARTLISTS /L:c:/OutlineLogs/outlineLoad.log /X:c:/OutlineLogs/outlineLoad.exc

        SmartList Name, Operation, Label, Display Order, Missing Label, Use Form Missing Label, Entry ID, Entry Name, Entry Label
        SL1,addsmartlist,SL1Label,,,,,,
        SL1,addEntry,,,,,,entry1,entrylabel1
        SL1,addEntry,,,,,,entry2,entrylabel2

    Cheers..!!! Rahul S. http://www.facebook.com/pages/HyperionPlanning/117320818374228


  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and instant deployments and instant scale for .NET? A while back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages, although some changes would have needed to happen for long-term use by the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next?

    Heroku

    Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn't sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your Rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this "cloud computing" or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don't count Windows Azure, mostly because it is not simple and I don't believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language's specific needs (solution stack). So I tucked the idea in the back of my head and moved on.

    AppHarbor Enters The Scene

    I'm not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn't actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web, and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to AppHarbor. From there they build your code, run any unit tests you have, and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI for getting things done.

    The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder. After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku.

    Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it.

    From Zero To Deployed in 15 Minutes (Or Less)

    Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from zero to deployed in less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that's the challenge: beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize.
    Ground rules:

    - .NET application with a valid database connection
    - Start from zero
    - Deployed with AppHarbor or an alternative
    - A timer displayed in the video that runs during the entire process
    - Video response published on YouTube or an acceptable alternative
    - Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)

    From Zero To Deployed In 15 Minutes (Or Less) Part 1
    From Zero To Deployed In 15 Minutes (Or Less) Part 2
    From Zero To Deployed In 15 Minutes (Or Less) Part 3
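    For reference, the "deploying is as simple as a git push" step mentioned above looks roughly like this - a sketch, where the remote URL is a placeholder for the one AppHarbor shows on your application page:

        # Sketch: deploy an existing git repository to AppHarbor.
        git remote add appharbor https://[email protected]/myapp.git
        git add -A && git commit -m "Ready to deploy"
        git push appharbor master   # AppHarbor builds, runs unit tests, then deploys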


  • Can I prevent an IDENTIFY PACKET DEVICE command to a specific device at boot?

    - by Brian Spisak
    This is related to a previous question about installation that is now resolved. I'm opening a new question because I still need to get my DVD drive working.

    Problem: failed boot when my ASUS DRW-24B1/ST DVD drive is attached to my ASMedia ASM1061.

    Symptom:

        ata8.00: exception Emask 0x52 Sact 0x0 SErr 0xffffffff action 0xe frozen
        ata8: SError: { blah blah }
        ata8.00: failed command: IDENTIFY PACKET DEVICE
        ata8.00: cmd blah blah res blah blah (ATA bus error)
        ata8.00: status: { DRDY }
        ata8: hard resetting link

    Background: the ASM1061 is a PCIe-to-SATA bridge providing 2 x 6Gb/s ports and is supposed to be fully compliant with the SATA specs. I just discovered in the fine print of my ASUS P8Z77-V Pro motherboard that "These SATA ports are for data hard drivers only. ATAPI devices are not supported." However, I have already installed Windows 7 using this drive, and I can run the Ubuntu 12.04 installer from it as well. The only time I have a problem is during Ubuntu boot, when it tries an IDENTIFY PACKET DEVICE, which seems to be an ATAPI command. I can't simply switch this device to another SATA port because they are already allocated to other devices. (My chipset's 2 x 6Gb/s ports are connected to my boot SSD and a fast HDD, while the 4 x 3Gb/s ports are running a RAID 5 array.) If this can't be fixed or worked around, I suppose I'll have to go buy a SATA add-in card. Blech.

    Thoughts: if indeed this is a device-specific issue (that it doesn't support ATAPI discovery), then I can't expect - is it udev? - to work with it. But it seems that Windows and even the Ubuntu installer work just fine. So why does udev have a problem? At the end of the day, it would be nice to have the DVD working under Ubuntu, but I can live without it. But as this is a dual-boot machine, I can't physically disconnect it, because I want it to work with Windows. (And physically disconnecting it every time I want to boot Ubuntu is NOT an option. ;-)

    Questions:

    1. Should this be considered a bug? My feeling is that if it works with other OSes, it should probably work with Ubuntu as well.
    2. How can I work around this problem? I have limited knowledge of linux internals, but it seems I should be able to somehow tell udev (or whatever is doing the discovery) to ignore that device. Is there a way?
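    On the workaround question: the probing here is done by the kernel's libata layer rather than udev, and libata accepts a boot parameter that can disable a single port or device. A sketch, assuming the drive really is ata8.00 as the log suggests (verify the port number in your own dmesg first):

        # Sketch: tell libata to skip the device on port 8 (per the ata8.00 lines).
        # Add the option to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash libata.force=8.00:disable"
        sudoedit /etc/default/grub
        sudo update-grub   # regenerate grub.cfg, then reboot to test

    The trade-off is that the drive disappears from Ubuntu entirely, which matches the "I can live without it" fallback rather than a real fix.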


  • How to use TFS as a query tracking system?

    - by deostroll
    We already use TFS for managing defects in code, etc. We additionally need a way to "understand the domain & requirements of the products". Normally, without TFS, we exchange emails with the consultants and have the questions/queries answered. If it is a feature implementation, we sometimes "find" conflicts in the implementation itself, and when that happens the user story is modified and the corresponding enhancement/bug is raised in TFS. Sometimes it is critical that we can come back to decisions we made or questions we wanted answers to. Hence we need to be able to track how that "requirement idea" or that "query in concern" evolved. So how can we use TFS to track all of this? Do we raise an "issue" item for this? Or do we raise a "bug" item?

    The main things we'd ideally look for in a query tracking system are as follows:

    - Area: can be a module, submodule, or domain. Sometimes this may be "General" - to address domain-related matters - or more granular, to address modules and sub-modules. Take the case of the latter: if we were tracking this in Excel sheets, we'd just write module1,submodule2, i.e. in comma-separated fashion. The thing I would like here is to be able to search for all queries relating to submodule2 sometime in the future.
    - Responses: a record of conversations between the consultant and any other stakeholder. In a simple case this would just be paragraphs; each para would start with a name and date enclosed in brackets, with the response following - each para would be like a thread, much like a forum thread.
    - Action taken: we'd want to know how the query was closed, what input was given, what changes took place because of it, etc.

    These are the fields I think I would need in such a system, apart from some obvious ones like status, addressed to, resolved by, etc. I am open to any other fields which are important.

    To summarise my question: how can we manage "queries" in the system? Where should we ideally store the data pertaining to the three fields I have mentioned above (e.g. is it wise to store responses in the history tag, assuming we are opening a bug for the query)?


  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC Clusters. With the release of the Database Plug-in (12.1.0.5.0), EM12c Release 3 (12.1.0.3.0) now supports automated upgrading of RAC Clusters in addition to standalone databases. This automation includes:

    - Upgrade of the complete cluster across the nodes. (Example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB)
    - Best practices in tune with your operations, where you can automate the upgrade in steps:
      - Step 1: Upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test, and then move to the DBs).
      - Step 2: Upgrade RAC DBs either separately or in a group (mass upgrade of RAC DBs in the cluster).
    - Standard prerequisite checks like the Cluster Verification Utility (CVU) and RAC checks.
    - Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH) and running checks) and downtime activities (like upgrading Clusterware to GI and upgrading RAC), thereby lowering the downtime required.
    - Ability to configure backup and restore options as a part of this upgrade process. You can choose to:
      a. Take a backup via this process (either a Guaranteed Restore Point (GRP) or RMAN),
      b. Set the procedure to pause just before the upgrade step, to allow you to take a custom backup,
      c. Ignore backup completely, if there are external mechanisms already in place.

    High-level steps:

    1. Select the procedure "Upgrade Database" from the Database Provisioning home page.
    2. Choose the target type for upgrade and the destination version.
    3. Pick and choose the cluster; it picks up the complete topology, since the clusterware/GI isn't upgraded already.
    4. Select the gold image of the destination version for deploying both the GI and RAC OHs.
    5. Specify the new OH patches and credentials; choose the restore and backup options; if required, provide additional pre and post scripts.
    6. Set the break points in the procedure execution to isolate downtime activities.
    7. Submit and track the procedure's execution status.

    The animation below captures the steps in the wizard. For the step-by-step process, and to understand the support matrix, check this documentation link. Explore the functionality!

    In the next blog, I will talk about automating rolling upgrades of databases in a Physical Standby Data Guard environment using Transient Logical Standby.


  • ZFS pool broken after upgrading to 14.04 LTS

    - by cruiserparts
    Well, I have been putting off upgrading to 14.04 for fear that I would break something - actually, for fear that it would break ZFS (or I would break it). I am basically slightly better than a novice at Linux. I've spent the last couple of hours trying to get the pool back. Now I am at the stage where I don't think I have a complete failure, but I am worried that I may break it. So if you could help me not break it, and recover it, I would be thankful.

    My ZFS is file storage, not boot. It was working fine for a year and was working perfectly before the upgrade (scrub and everything was fine). I was confident that the upgrade would work (or at least that I could fix it), because I had upgraded once in the past, the pool went missing, and I was able to get it back. I have reinstalled zfs, the zfs utilities, and some dependencies (after searching this forum). I think what happened is that 14.04 deleted some config file, or specifies disk names differently, but I could be wrong. When I set the pool up originally, I was using specific device IDs as I recall (because I did not want to break things if they got reassigned at boot). So see if this helps. I can confirm that the old mountpoint folders are there but empty.

        no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory
          pool: naspool1
         state: UNAVAIL
        status: One or more devices could not be used because the label is missing or invalid.
                There are insufficient replicas for the pool to continue functioning.
        action: Destroy and re-create the pool from a backup source.
           see: http://zfsonlinux.org/msg/ZFS-8000-5E
          scan: none requested
        config:

            NAME                                           STATE     READ WRITE CKSUM
            naspool1                                       UNAVAIL      0     0     0  insufficient replicas
              raidz1-0                                     UNAVAIL      0     0     0  insufficient replicas
                scsi-SATA_WDC_WD1001FALS-_WD-WMATV0990825  UNAVAIL      0     0     0
                scsi-SATA_WDC_WD1001FALS-_WD-WMATV2995365  UNAVAIL      0     0     0
                scsi-SATA_WDC_WD10EARS-00_WD-WMAV51894349  UNAVAIL      0     0     0

        ___@ourserver:~$ sudo zpool import naspool1
        cannot import 'naspool1': a pool with that name is already created/imported, and no additional pools with that name were found
        ___@ourserver:~$ sudo zfs list
        no datasets available

    What other output can I post to help? I'm thinking the update deleted some ZFS config files. It seems like the pool exists, and certainly 3 perfectly working disks did not fail at once. I am worried that I may break something without a little bit of guidance. Thanks.
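    A common recovery path when an upgrade changes device naming is to export the half-imported pool and re-import it by device ID so ZFS rescans the labels. A sketch, with a read-only first pass as a precaution (this assumes the disks themselves are healthy):

        # Sketch: re-import the pool using stable /dev/disk/by-id names.
        sudo zpool export naspool1
        sudo zpool import -d /dev/disk/by-id -o readonly=on naspool1
        # If the pool comes back healthy, flip it to read-write:
        sudo zpool export naspool1
        sudo zpool import -d /dev/disk/by-id naspool1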


  • Dealing with "I-am-cool-and-you-are-dumb" manager [closed]

    - by Software Guy
    I have been working at a software company for about 6 months now. I like the projects I work on there, and I really like all the people there except for one guy. That guy is technically smart, and he is a co-founder of the company. He is an okay guy in person (the kind you wouldn't care about much), but things get tricky when he is your manager. In general I am okay, but there are times when I feel I am not being treated fairly:

    1. He doesn't give much thought to his own mistakes, but when I do something similar, he is super critical. Recently he went as far as to say "I am not sure if I can trust you with this feature". The details of this specific case: I was working on the feature, I was already a couple of hours over my normal working hours, and I decided to stop and continue tomorrow. We use git, and I like to commit changes locally and only push when I feel they are ready. This manager insists that I push all changes to the central repo (in case my hard drive crashes). So I pushed the change, and the ticket was marked as "to be tested". The next day I come in, he sits next to me and starts complaining, and says what I posted above. I really didn't know what to say. I tried to explain to him that the ticket was still being worked on, but he didn't seem to listen.

    2. He interrupts me in between when I am coding, which I do not mind, but when I do the same, his face turns like this :| and he reacts as if his work were super important and I am just wasting his time. He asks me to accumulate all my questions and then ask him all at once, which is not always possible, as you need a clarification before you can continue on a feature implementation. And when I am coding, he talks on the phone with his customers next to me (when he could go to the meeting room with his laptop) and doesn't care.

    3. He made me switch to a whole new IDE (from NetBeans to a commercial IDE costing a lot of money) for a really tiny feature (which I later found out was in NetBeans as well!). I didn't make a big deal out of it, as I am equally comfortable working with the new IDE, but I couldn't see the science behind his obsession. He said this feature makes sure that if any method is updated by a programmer, the IDE will turn the method name red in the places where it is used. I told him that I do not have a problem, since I always search for method usage in the project and make sure it's updated. IDEs even have refactoring features for exactly that, but...

    4. I recently implemented a feature for a project, and I was happy about it, and considering him a senior, I asked for his comments on the implementation quality. He thought long and hard, made a few funny faces, and when he couldn't find anything, he said "ummm, your program will crash if JS is disabled". He was wrong, since I had made sure it would work fine with default values even if JS was disabled. I told him that, and then he said "oh okay". BUT, the funny thing is, a few days back he implemented something, I objected with "But that would not run if JS is disabled", and his response was "We don't have to care about people who disable JS". :-/

    5. Once he asked me to investigate whether there was a way to modify a CMS-generated menu programmatically by extending the CMS. I did my research and told him that the only way is to inject a menu item using JavaScript/jQuery, and his reaction was "ah, that's ugly and hacky, not acceptable". Two days later, I saw that feature implemented in exactly the way I had suggested. The point is, his reaction was not respectful at all. Even if what I proposed was hacky, he should be respectful: I know what's hacky, and if I am suggesting something hacky, there must be a reason for it.

    There are plenty of other reasons/examples where I feel I am not being treated fairly. I want your advice as to what it is that I am doing wrong and how to deal with such a situation. The other guys in the team are actually very good people, and I do not want to leave the job either (although I could, if I wanted to). All I want is respect and equal treatment. I have thought about talking to this guy in a face-to-face meeting, but it worries me that his attitude might get worse and make things more difficult for me (since he doesn't seem to be the kind of guy who thinks he can be wrong too). I am also considering talking to the other co-founder, but I am not sure how he will take it (as the two founders have been friends forever). Thanks for reading the long message; I really appreciate your help.


  • Error installing RVM

    - by Dbugger
    I am following this guide, but below is the output I receive. What is the problem?

        dbugger@mercury:~$ \curl -sSL https://get.rvm.io | bash -s stable --rails
        Downloading https://github.com/wayneeseguin/rvm/archive/stable.tar.gz
        Upgrading the RVM installation in /home/dbugger/.rvm/
        RVM PATH line found in /home/dbugger/.profile /home/dbugger/.bashrc /home/dbugger/.zshrc.
        RVM sourcing line found in /home/dbugger/.bash_profile /home/dbugger/.zlogin.
        Upgrade of RVM in /home/dbugger/.rvm/ is complete.

        # Enrique,
        #
        # Thank you for using RVM!
        # We sincerely hope that RVM helps to make your life easier and more enjoyable!!!
        #
        # ~Wayne, Michal & team.

        In case of problems: http://rvm.io/help and https://twitter.com/rvm_io

        Upgrade Notes:
          * No new notes to display.

        rvm 1.25.27 (stable) by Wayne E. Seguin <[email protected]>, Michal Papis <[email protected]> [https://rvm.io/]
        Searching for binary rubies, this might take some time.
        No binary rubies available for: ubuntu/14.04/x86_64/ruby-2.1.2.
        Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
        Checking requirements for ubuntu.
        Installing requirements for ubuntu.
        Updating system..........
        Installing required packages: gawk, libreadline6-dev, libssl-dev, libyaml-dev, libsqlite3-dev, sqlite3....
        Error running 'requirements_debian_libs_install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3',
        showing last 15 lines of /home/dbugger/.rvm/log/1401804140_ruby-2.1.2/package_install_gawk_libreadline6-dev_libssl-dev_libyaml-dev_libsqlite3-dev_sqlite3.log
        ++ /scripts/functions/utility : __rvm_try_sudo() 405 > sudo -p '%p password required for '\''apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3'\'': ' apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3
        Reading package lists...
        Building dependency tree...
        Reading state information...
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         libssl-dev : Depends: libssl1.0.0 (= 1.0.1f-1ubuntu2) but 1.0.1f-1ubuntu2.1 is to be installed
        E: Unable to correct problems, you have held broken packages.
        ++ /scripts/functions/utility : __rvm_try_sudo() 405 > return 100
        ++ /scripts/functions/requirements/ubuntu : requirements_debian_libs_install() 36 > return 100
        Requirements installation failed with status: 100.
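    The failing line is the unmet dependency: the repo's libssl-dev pins libssl1.0.0 (= 1.0.1f-1ubuntu2) while apt wants to install 1.0.1f-1ubuntu2.1, which usually points at stale package lists. A sketch of the usual remedy (package names come straight from the error; treat the sequence as a starting point):

        # Sketch: refresh lists and align libssl before re-running the RVM installer.
        sudo apt-get update
        sudo apt-get install --only-upgrade libssl1.0.0
        sudo apt-get install libssl-dev
        \curl -sSL https://get.rvm.io | bash -s stable --rails   # retry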


  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
    A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes:

    - The patch number is 9791839.
    - This patchset includes 28 new bug fixes since the last patchset release on March 31.
    - This is a cumulative update that includes all the fixes and enhancements from previous updates. The patch supersedes the other two updates.
    - Install instructions are in the readme inside the patch.
    - There is also a new BIP client patch available, 9821068. No new template-building features to my knowledge, but there is an update to the template viewer to allow you to test and debug your shiny new Excel templates.

    Server

        8529759 XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE
        8566455 BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE
        9295667 RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED
        9542413 UNABLE TO CREATE A NEW TEMPLATE FROM UI
        9546137 EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED
        9556338 SIEBEL - BIP PARAMETERS SORT ORDER
        9560562 BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY
        9646599 USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED
        9664768 ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY
        9665075 BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL
        9669973 ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE
        9704401 ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY
        9711899 SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT
        9753736 SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING
        9771354 MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3.
        9772982 "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY

    Core

        8599646 ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX
        9377593 SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER
        9487030 NAVIGATION TREE REPEATING TWICE IN PDF DOCUMENT CREATED BY BI PUBLISHER
        9509432 PERFORMANCE ISSUE WHEN USING PDF TEMPLATE
        9534424 PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR
        9553360 FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES
        9554959 TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING
        9569417 AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT
        9571670 ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENTIONS
        9589809 XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE
        9605920 BOOKMARK TESTCASE FAILED DUE TO ER9283933
        9689634 PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES

    You might have noticed some fixes and enhancements to Excel templates, so I can get back on those now. There is a part two to the Mapviewer BIP mashup coming... I just need another 4 hours in the day to squeeze it in.


  • Partners, Start your Engines

    - by Kristin Rose
    Hello, speed racer - OPN here to inform you that, in case you missed it, our ISV-focused PartnerCast took place last week in Oracle's Redwood Shores studio. Without a roadblock in sight, Oracle's key drivers discussed topics like Oracle Exastack Optimized and Oracle's Cloud offerings; OPN partner MSC even took it to the next level by stopping by to share their first-hand experience with the Oracle Exastack Optimized program. By stepping into Oracle's "Motor Speedway", better known as the Exastack lab, MSC was able to fine-tune, test and optimize their application on Oracle Exadata, as well as gain outstanding expertise in several technical areas such as optimizing multithreaded applications and database tuning. By optimizing their solution, MSC has "decreased their deployment time and saw a 30 percent performance improvement in database." Sounds like someone's gearing up for an "Oracle Indy 500." By achieving Oracle Exadata Optimized status, MSC is putting performance in the driver's seat, and their customers at the front of the race, by delivering a solution that is tuned for performance, scalability and reliability. So go ahead and let the Oracle Exastack Optimized pit crew take you to the finish line. Learn how to go from 0-60 by watching MSC's segment of the ISV-focused PartnerCast below. Ready... Set... Optimize!

    The OPN Communications Team


  • Live CD has black screen HP DV6

    - by Shaun Killingbeck
    Attempting to install/try Ubuntu (11.10, 12.04) on my new laptop, using a live CD (I also tried USB). I get the purple screen (with the man/keyboard at the bottom), and after that the screen flashes bright white before going black. Ubuntu continues to load in the background, with the login sound etc., but the screen is off. I have tried as many different solutions as I could find, including:

    - Using nomodeset, xforcevesa, i915.modeset=0, and also i915.modeset=1 in the boot options (separately): varying consequences, but either I end up at a blinking cursor with no prompt, a command line (startx fails: no screen found), or the original blank screen again.
    - Booting from VirtualBox: it crashes at the same place the screen would go blank when using a CD/USB.
    - 11.04: I don't have this problem, BUT when trying to install, I get a ubi-partman error 141 (possibly down to the three partitions that came on my laptop... not sure why HP needed its own separate partition for HP Tools...).

    Model: HP Pavilion DV6 6B08SA
    Processor: AMD Quad-Core A6-3410MX APU with Radeon HD 6545G2 Dual Graphics (1.6 GHz, 4 MB L2 cache)
    Chipset: AMD RS880M

    Any help would be greatly appreciated. I just want to be able to partition the drive and install Ubuntu. I'm assuming the issue is graphics card related, although I have no confirmation of that.

    Update: I tried the workarounds on https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen - set gfxpayload=text changed nothing, removing splash did nothing, and setting vesafb.nonsense=1 did nothing either. I'd like to be able to collect some log information somehow, but I can't get to a command line from the live CD. I tried using the latest 12.04 beta: same issue. I tried nomodeset without splash or quiet, and get the following (tail of) output before it freezes on that screen:

        * Starting configure network device security          [OK]
        * Starting configure network device                   [OK]
        [ 25.720899] ieee80211 phy0: w1_ops_config: change monitor mode: false (implement)
        [ 25.720923] ieee80211 phy0: w1_ops_config: change power-save mode: false (implement)
        * Starting restore sound card(s') mixer state(s)      [fail]
        [ 25.721849] ieee80211 phy0: w1_ops_bss_info_changed: qos enabled: false (implement)
        * Stopping save kernel messages                       [OK]
        * Starting bluetooth                                  [OK]
        * PulseAudio configured for per-user sessions
        saned disabled; edit /etc/default/saned
        [ 25.988016] hci_cmd_timer: hci0 command tx timeout
        [ 26.207225] bad LUN (0:1)
        [ 26.223735] bad target number (1:0)
        [ 26.252111] bad target number (2:0)
        [ 26.272170] bad target number (3:0)
        [ 26.300154] bad target number (4:0)
        [ 26.328162] bad target number (5:0)
        [ 26.344180] bad target number (6:0)
        [ 26.368142] bad target number (7:0)
        * Checking battery state...                           [OK]
        * Stopping System V runlevel capability              [OK]

    Does this give any indication of the problem? The false (implement) messages also reappear when I press the power button to ask it to shut down, followed by a [fail] status for killing remaining processes.
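    One hedged observation: i915.modeset only affects Intel graphics, and the A6-3410MX APU here uses AMD Radeon graphics, so the analogous experiments would target the radeon driver instead. Hypothetical kernel-line variants to try one at a time from the live CD boot menu (press F6 or e to edit the kernel line; whether any of them helps is machine-specific):

        # Hypothetical boot options for an AMD APU (edit the kernel line in GRUB/syslinux):
        #   radeon.modeset=0 nomodeset    # disable AMD kernel mode-setting
        #   xforcevesa radeon.modeset=0   # force the VESA X driver as a fallback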

    Read the article

  • Parallel MSBuild FTW - Build faster in parallel

    - by deadlydog
    Hey everyone, I just discovered this great post yesterday that shows how to have MSBuild build projects in parallel. Basically all you need to do is pass the switches "/m:[NumOfCPUsToUse] /p:BuildInParallel=true" into MSBuild. Example to use 4 cores/processes (if you just pass in "/m" it will use all CPU cores):

        MSBuild /m:4 /p:BuildInParallel=true "C:\dev\Client.sln"

    Obviously this trick will only be useful on PCs with multi-core CPUs (which we should all have by now) and solutions with multiple projects, so there's no point using it for solutions that only contain one project. Also, testing shows that using multiple processes does not speed up Team Foundation Database deployments either, in case you're curious. Also, I found that if I didn't explicitly use "/p:BuildInParallel=true" I would get many build errors (even though the MSDN documentation says that it is true by default). The poster boasts compile-time improvements of up to 59%, but the performance boost you see will vary depending on the solution and its project dependencies. I tested by building a solution at my office, and here are my results (runs are in seconds):

        # of Processes   1st Run   2nd Run   3rd Run   Average   Performance
        1                192       195       200       195.67    100%
        2                155       156       156       155.67    79.56%
        4                146       149       146       147.00    75.13%
        8                136       136       138       136.67    69.85%

    So I updated all of our build scripts to build using 2 cores (~20% speed boost), since that gives us the biggest bang for our buck on our solution without bogging down a machine, and developers may sometimes compile more than one solution at a time. I've put the any-PC-safe batch script code at the bottom of this post. The poster also has a follow-up post showing how to add a button and keyboard shortcut to the Visual Studio IDE to have VS build in parallel as well (so you don't have to use a build script); if you do this, make sure you use the .NET 4.0 MSBuild, not the 3.5 one that he shows in the screenshot. While this did work for me, I found it left an MSBuild.exe process hanging around afterwards for some reason, so watch out (the batch file doesn't have this problem). Also, you do get build output, but it may not be the same output you're used to, and it doesn't say "Build succeeded" in the status bar when completed, so I chose not to make this my default Visual Studio build option, but you may still want to. Happy building!

        -------------------------------------------------------------------------------------
        :: Calculate how many processes to use to do the build.
        SET NumberOfProcessesToUseForBuild=1
        SET BuildInParallel=false
        if %NUMBER_OF_PROCESSORS% GTR 2 (
            SET NumberOfProcessesToUseForBuild=2
            SET BuildInParallel=true
        )
        MSBuild /maxcpucount:%NumberOfProcessesToUseForBuild% /p:BuildInParallel=%BuildInParallel% "C:\dev\Client.sln"

    Read the article

  • Customer Experience and BPM – From Efficiency to Engagement

    - by Ajay Khanna
    Over the last few years, the focus of BPM has mainly been to improve business efficiency: to create more efficient processes, to remove bottlenecks, to automate processes. That still holds true, and why not? Isn't BPM all about continuous improvement? BPM facilitates and requires business and IT collaboration. But business also requires working with customers. Do we not want to get close to and collaborate with our customers? This is where Social BPM takes BPM a step further. It not only allows people within an organization to collaborate to design exceptional processes, and not only lets them collaborate on resolving a case, but also lets them engage with customers. Engaging with customers means, first of all, connecting with them on their terms and turf. Take a new account opening process. Can a customer call you and initiate the process? Can a customer email you, or go to the website, and initiate the process? Can they tweet you and initiate the process? Can they check the status of the process via any channel they like? Can they take a picture of a damaged package delivery and kick off a returns process from their mobile device, with GIS data? Yes, these are various aspects to consider during process design if the goal is better customer experience and engagement. Of course, we want to be efficient and agile, but the focus here needs to be the customer. Now, when customers are tweeting about your products and posting on Facebook and Yelp about their experience with your company (and your processes), you need to seek out that information. You need to gather and analyze the customers' feedback on social media and use that information to improve your processes and products. This is an excellent source of product and process ideation. So BPM is no longer only about improving back-office process efficiency; it is moving into a new and exciting phase of improving frontline customer-facing processes, customer experience and engagement. Let me know how you think BPM can enhance customer experience.

    Read the article

  • OpenGLES GLSL Shader attributes always bound to 0

    - by codemonkey
    So I have a very simple vertex shader, as follows:

        #version 120
        attribute vec3 position;
        attribute vec3 inColor;
        uniform mat4 mvp;
        varying vec3 fragColor;

        void main(void) {
            fragColor = inColor;
            gl_Position = mvp * vec4(position, 1.0);
        }

    as well as the fragment shader:

        #version 120
        varying vec3 fragColor;

        void main(void) {
            gl_FragColor = vec4(fragColor, 1.0);
        }

    both of which I load, compile, and link into my shader program. I check the link status using glGetProgramiv(shaderProgram, GL_LINK_STATUS, &shaderSuccess); which returns GL_TRUE, so I think it's OK. However, when I query the active attributes and uniforms using

        #ifdef DEBUG
        int totalAttributes = -1;
        glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &totalAttributes);
        for (int i = 0; i < totalAttributes; ++i) {
            int name_len = -1, num = -1;
            GLenum type = GL_ZERO;
            char name[100];
            glGetActiveAttrib(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name);
            name[name_len] = 0;
            GLuint location = glGetAttribLocation(shaderProgram, name);
            fprintf(stderr, "Attribute %s is bound at %d\n", name, location);
        }

        int totalUniforms = -1;
        glGetProgramiv(shaderProgram, GL_ACTIVE_UNIFORMS, &totalUniforms);
        for (int i = 0; i < totalUniforms; ++i) {
            int name_len = -1, num = -1;
            GLenum type = GL_ZERO;
            char name[100];
            glGetActiveUniform(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name);
            name[name_len] = 0;
            GLuint location = glGetUniformLocation(shaderProgram, name);
            fprintf(stderr, "Uniform %s is bound at %d\n", name, location);
        }
        #endif

    I get:

        Attribute inColor is bound at 0
        Attribute position is bound at 1
        Uniform mvp is bound at 0

    which leads to failure when trying to use the shader to render the objects. I have tried switching the order of declaration of position and inColor, but still only position is bound, with the other two giving 0. Can someone please explain why this is happening? Thanks

    Read the article

  • How to switch off wifi on startup or from the console

    - by mit
    I have installed Ubuntu 10.04 on a laptop. Wifi is switched on by default on startup. I can disable it by right-clicking the network manager icon in the gnome bar. How can I make wifi switched off the default? Alternatively, how can I switch off wifi from the console? I have already tried the rfkill command, but it does not list any devices and it does not switch off wifi; I tried different parameters. This is a standard install of the Ubuntu 10.04 i386 Desktop Live CD on an IBM T40 laptop.

    EDIT A: This is the output of some rfkill commands on my system; none of them affect the laptop's wifi:

        $ rfkill --help
        Usage: rfkill [options] command
        Options:
            --version    show version (0.4)
        Commands:
            help event list [IDENTIFIER] block IDENTIFIER unblock IDENTIFIER
        where IDENTIFIER is the index no. of an rfkill switch or one of:
            <idx> all wifi wlan bluetooth uwb ultrawideband wimax wwan gps fm
        $ rfkill list
        $ rfkill list wifi
        $ rfkill list all
        $ rfkill list wlan
        $ sudo rfkill list all
        $ sudo rfkill block all
        $ sudo rfkill block wlan
        $ sudo rfkill block wifi
        $

    EDIT B: Now I have found out that sudo ifconfig eth1 down turns it off, and I can turn it on again through the gnome network applet. But the applet does not reflect the change made from the command line; it still believes wifi is switched on. After switching it off from the console, I have to switch it off and on again in the applet to re-enable it. Is there a better way? This is what the syslog looks like when I switch wireless off and on again from the network manager:

        NetworkManager: <info> (eth1): device state change: 3 -> 2 (reason 0)
        NetworkManager: <info> (eth1): deactivating device (reason: 0).
        NetworkManager: <info> Policy set '24' (eth0) as default for routing and DNS.
        NetworkManager: <info> (eth1): taking down device.
        avahi-daemon[660]: Withdrawing address record for fe80::202:8aff:feba:d798 on eth1.
        kernel: [ 971.472116] airo(eth1): cmd:3 status:7f03 rsp0:0 rsp1:0 rsp2:0
        NetworkManager: <info> (eth1): bringing up device.
        NetworkManager: <info> (eth1): supplicant interface state: starting -> ready
        NetworkManager: <info> (eth1): device state change: 2 -> 3 (reason 42)
        avahi-daemon[660]: Registering new address record for fe80::202:8aff:feba:d798 on eth1.*.
        kernel: [ 965.512048] eth1: no IPv6 routers present
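    Since the question itself found that sudo ifconfig eth1 down switches the radio off, a minimal sketch for making that the boot default is to run the same command from /etc/rc.local, which a stock Ubuntu 10.04 install executes at the end of boot (this assumes the wireless interface really is eth1, as in the question; check the output of ifconfig -a first):

        # In /etc/rc.local, above the final "exit 0":
        # bring the wireless interface down by default; it can still be
        # re-enabled later from the network manager applet.
        ifconfig eth1 down

    As noted in EDIT B, the network manager applet may still show wifi as on until it is toggled there.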

    Read the article
