Search Results

Search found 8550 results on 342 pages for 'datetime operation'.


  • Some Oracle VM 3 updates

    - by wcoekaer
    Today we did another patch set update for Oracle VM 3 (3.0.3 build 227). It can be downloaded from My Oracle Support as patch ID 14736185. There are quite a few updates in here, and I highly recommend that any Oracle VM 3 customer or user install this update. The patch can be installed on top of Oracle VM 3.0 versions 3.0.2 and 3.0.3, and it is cumulative for 3.0.3: if you already installed patch update 1 (3.0.3-150), this will just be incremental on top of that and bring you to 3.0.3 build 227. A readme file in the patch info contains the patch list.

    The following patches are released on ULN for Oracle VM Server 3.0:

    - initscripts-8.45.30-2.100.18.el5.x86_64 : the inittab file and the /etc/init.d scripts
    - kernel-ovs-2.6.32.21-45.6.x86_64 : the Linux kernel
    - kernel-ovs-firmware-2.6.32.21-45.6.x86_64 : firmware files used by the Linux kernel
    - osc-oracle-ocfs2-0.1.0-35.el5.noarch : Oracle Storage Connect ocfs2 plugin
    - osc-plugin-manager-1.2.8-9.el5.3.noarch : Oracle Storage Connect plugin infrastructure
    - osc-plugin-manager-devel-1.2.8-9.el5.3.noarch : Oracle Storage Connect plugin development
    - ovs-agent-3.0.3-41.6.x86_64 : agent for Oracle VM
    - xen-4.0.0-81.el5.1.x86_64 : Xen is a virtual machine monitor
    - xen-devel-4.0.0-81.el5.1.x86_64 : development libraries for Xen tools
    - xen-tools-4.0.0-81.el5.1.x86_64 : various tooling for the manipulation of Xen instances

    Errata emails with details on the above updates will be sent in the next few days, or you will find them here.

    I also did an update of my Oracle VM utilities, to 0.4.0. They are also available from My Oracle Support, patch ID 14736239. These utils can be unzipped and installed on the server running Oracle VM Manager, typically in /u01/app/oracle/ovm-manager-3/ovm_utils. There is a set of man pages in /u01/app/oracle/ovm-manager-3/ovm_utils/man/man8. There are now six commands:

    - ovm_vmcontrol : VM-level operations
    - ovm_servercontrol : server-level operations
    - ovm_vmdisks : virtual disk/physical location mapping for VM disks
    - ovm_vmmessage : message-passing utility between the manager and the VM tools (in the Oracle VM templates)
    - ovm_repocontrol : repository-level operations
    - ovm_poolcontrol : pool-level operations

    Some of the new changes:

    - at a pool level, acknowledge events and cascade to servers and virtual machines with outstanding events
    - at a pool level, do a rescan of the storage for fibrechannel/iscsi disks if you add new devices (it then does this operation on every running server)
    - at a repository level, fix up a device if it had a failed create-repository
    - at a repository level, refresh the repository; this will update the free space in the UI for ocfs2 repositories
    - at a server level, acknowledge server events and cascade to virtual machines if needed
    - at a VM level, acknowledge VM events
    - at a VM level, bind vcpus to cores with vcpuset/vcpuget

    Please see the man pages, and remember that these tools are written as-is, with no SRs (per the documentation). Hopefully they are useful.
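    To browse those man pages before running anything, something like the following should work. This is a hypothetical session: the zip filename is a guess, and only the install path and man page location come from the post itself.

    ```bash
    # Unzip the utilities onto the Oracle VM Manager server (zip name is an example)
    unzip ovm_utils.zip -d /u01/app/oracle/ovm-manager-3

    # Read the section-8 man page for one of the six commands
    man -M /u01/app/oracle/ovm-manager-3/ovm_utils/man 8 ovm_vmcontrol
    ```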

  • Updating an Entity through a Service

    - by GeorgeK
    I'm separating my software into three main layers (maybe tiers would be a better term):

    - Presentation ('Views')
    - Business logic ('Services' and 'Repositories')
    - Data access ('Entities', e.g. ActiveRecords)

    What do I have now? In Presentation, I use read-only access to Entities, returned from Repositories or Services, to display data:

        $banks = $banksRegistryService->getBanksRepository()->getBanksByCity( $city );
        $banksViewModel = new PaginatedList( $banks ); // some way to display banks;
        // example, not real code

    I find this approach quite efficient in terms of performance and code maintainability, and still safe as long as all write operations (create, update, delete) are performed through a Service:

        namespace Service\BankRegistry;

        use Service\AbstractDatabaseService;
        use Service\IBankRegistryService;
        use Model\BankRegistry\Bank;

        class Service extends AbstractDatabaseService implements IBankRegistryService
        {
            /**
             * Registers a new Bank
             *
             * @param string $name Bank's name
             * @param string $bik Bank's Identification Code
             * @param string $correspondent_account Bank's correspondent account
             *
             * @return Bank
             */
            public function registerBank( $name, $bik, $correspondent_account )
            {
                $bank = new Bank();
                $bank -> setName( $name )
                      -> setBik( $bik )
                      -> setCorrespondentAccount( $correspondent_account );

                if( null === $this->getBanksRepository()->getDefaultBank() )
                    $this->setDefaultBank( $bank );

                $this->getEntityManager()->persist( $bank );

                return $bank;
            }

            /**
             * Makes the $bank system's default bank
             *
             * @param Bank $bank
             * @return IBankRegistryService
             */
            public function setDefaultBank( Bank $bank )
            {
                $default_bank = $this->getBanksRepository()->getDefaultBank();
                if( null !== $default_bank )
                    $default_bank->setDefault( false );

                $bank->setDefault( true );

                return $this;
            }
        }

    Where am I stuck? I'm struggling with how to update certain fields in the Bank Entity.

    Bad solution #1: making a series of setters in the Service for each setter in Bank. This seems quite redundant; it increases the Service interface's complexity and proportionally decreases its simplicity, which is something to avoid if you care about code maintainability. I try to follow the KISS and DRY principles.

    Bad solution #2: modifying Bank directly through its native setters. Really bad: if you ever need to move a modification into the Service, it will be painful. Business logic should remain in the business logic layer. Plus, there are plans to log all of the actions, and maybe even involve user permissions (perhaps through decorators) in the future, so all modifications should be made only through the Service.

    Possible good solution: creating an updateBank( Bank $bank, $array_of_fields_to_update ) method. This makes the interface as simple as possible, but there is a problem: one should not try to manually set the isDefault flag on a Bank; that operation should be performed through the setDefaultBank method. It gets even worse when you have relations that you don't want modified directly. Of course, you can just limit the fields that can be modified by this method, but how do you tell the method's user what they can and cannot modify? Exceptions?

  • How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?

    - by lko
    Say you have a few applications which deal with a few different Core Domains. The examples are made up, and it's hard to put a real example with meaningful data together (concisely).

    In Domain Driven Design (DDD), when you start looking at Bounded Contexts and Domains/Sub Domains, it says that a Bounded Context is a "phase" in a lifecycle. An example of a Context here would be within an ecommerce system. Although you could model this as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application has its own Ubiquitous Language, its own Model, and a way to talk to other Bounded Contexts to obtain the information it needs. The Core, Sub, and Generic Domains are the areas of expertise and can be numerous in complex applications.

    Say there is a long process dealing with an Entity, for example a Book in a core domain. Looking at the Bounded Contexts, there can be a number of phases in the book's life-cycle: say outline, creation, correction, publish, and sale phases. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books. The store can have a number of Bounded Contexts (life-cycle phases), for example a "Stock" or "Inventory" context. In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different life-cycles. Now suppose you have 10+ supporting domains, e.g. Users, Catalogs, Inventory (hard to think of relevant examples); for example, a DomainModel for the Book Outline phase, the Creation phase, the Correction phase, the Publish phase, and the Sale phase. The Store core domain then probably has a number of life-cycle phases of its own.

        public class BookId : Entity
        {
            public long Id { get; set; }
        }

    In the creation phase (Bounded Context) the book could be a simple class:

        public class Book : BookId
        {
            public string Title { get; set; }
            public List<string> Chapters { get; set; }
            //...
        }

    Whereas in the publish phase (Bounded Context) it would have all the text, release date, etc.:

        public class Book : BookId
        {
            public DateTime ReleaseDate { get; set; }
            //...
        }

    The immediate benefit I can see in separating by "life-cycle phase" is that it's a great way to separate business logic so there aren't mammoth all-encompassing Entities or Domain Services. A problem I have is figuring out how to concretely define the rules for the physical layout of the Domain Model.

    A. Does the Domain Model get "modeled" so there are as many bounded contexts (separate projects etc.) as there are life-cycle phases across the core domains in a complex application?
    Edit: Answer to A. Yes; according to the answer by Alexey Zimarev, there should be an entire "Domain" for each bounded context.

    B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)?
    Edit: Answer to B. Each Bounded Context should have its own complete "Domain" (Service/Entities/VOs/Repositories).

    C. Does it mean there can easily be tens of "segregated" Domain Models, and multiple projects can use them (the Entities/Value Objects)?
    Edit: Answer to C. There is a complete "Domain" for each Bounded Context, and the Domain Model (Entity/VO layer/project) isn't "used" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events).

    The part that I am trying to figure out is how the Domain Model is actually implemented once you start to figure out your Bounded Contexts and Core/Sub Domains, particularly in complex applications. The goal is to establish definitions which can help to separate Entities between the Bounded Contexts and Domains.

  • PowerShell – Show a Notification Balloon

    - by BuckWoody
    In my presentations on PowerShell I sometimes want to start a process (like a backup) that will take some time. I normally pop up a notification "balloon" at the start, then do the bulk of the work, and then pop up a balloon at the end to let me know it's done. You can actually try out this little sample (on a test system, of course) without any other code to see what it does; then just put your other PowerShell commands in the "# Do some work" part. Oh, and throw an icon (.ico file) in a c:\temp directory, or point that somewhere else. (No, this probably isn't original. I can't remember where I saw the original code, but I've modified it a bit anyway, so if you're the original author and this looks slightly familiar, post a comment.)

        [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
        $objBalloon = New-Object System.Windows.Forms.NotifyIcon
        $objBalloon.Icon = "C:\temp\Folder.ico"

        # You can use the value Info, Warning, Error
        $objBalloon.BalloonTipIcon = "Info"

        # Put what you want to say here for the start of the process
        $objBalloon.BalloonTipTitle = "Begin Title"
        $objBalloon.BalloonTipText = "Begin Message"
        $objBalloon.Visible = $True
        $objBalloon.ShowBalloonTip(10000)

        # Do some work

        # Put what you want to say here for the completion of the process
        $objBalloon.BalloonTipTitle = "End Title"
        $objBalloon.BalloonTipText = "End Message"
        $objBalloon.Visible = $True
        $objBalloon.ShowBalloonTip(10000)

    Script disclaimer, for people who need to be told this sort of thing: never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or virtual machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

  • Samba fails to install

    - by jschoen
    I am running XBMC, which is built around Ubuntu 10.04. It does not come with Samba pre-installed, and I need to share some media with a couple of other boxes. I followed the Think Geek directions found here. I had it all set up a couple of days ago and thought I was in the clear, but I rebooted this evening and when it came back up Samba was not started. I determined this by trying to access the Samba shares, which returned an error connecting to the server. I can ssh into the box, so I know it is connected.

    In my infinite wisdom, I figured I had just messed something up and would uninstall and reinstall. So I ran sudo apt-get purge samba and sudo apt-get purge smbfs, then tried to follow the tutorial above again. What I get after running sudo apt-get install samba smbfs is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          openbsd-inetd inet-superserver smbldap-tools ldb-tools ufw smbclient
        The following NEW packages will be installed:
          samba smbfs
        0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
        Need to get 0B/8,131kB of archives.
        After this operation, 22.6MB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package samba.
        (Reading database ... 57098 files and directories currently installed.)
        Unpacking samba (from .../samba_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb) ...
        Selecting previously deselected package smbfs.
        Unpacking smbfs (from .../smbfs_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb) ...
        Processing triggers for ureadahead ...
        Setting up samba (2:3.4.7~dfsg-1ubuntu3.2) ...
        Generating /etc/default/samba...
        update-alternatives: using /usr/bin/smbstatus.samba3 to provide /usr/bin/smbstatus (smbstatus) in auto mode.
        smbd start/running, process 2963
        **start: Job failed to start**
        Setting up smbfs (2:3.4.7~dfsg-1ubuntu3.2) ...

    The bold is my own emphasis. So I am not sure what I messed up here, or how to get back to where it was, though I am pretty sure I made it worse than it is. I found where the logs are located (/var/log) and found this line, which seems to be the culprit:

        Jan 29 11:59:34 XBMCLive smbd[2806]: error opening config file

    So it seems the configuration files were never recreated. Is there a way to get Samba to recreate them?
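    Since the log points at a missing config file, one recovery sketch is to restore the stock smb.conf and validate it before starting the daemon. This assumes the standard Ubuntu locations; the template path and the upstart job name are common defaults, not confirmed for this XBMC build.

    ```bash
    # Restore Ubuntu's default Samba config template (shipped by samba-common)
    sudo mkdir -p /etc/samba
    sudo cp /usr/share/samba/smb.conf /etc/samba/smb.conf

    # Validate the configuration before starting the daemon
    testparm

    # Restart the service (on 10.04 the job is usually "smbd")
    sudo service smbd restart
    ```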

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically, and it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7GB free and Box offers 5GB free; both are reliable, and it is simple to send your backups there. SQLBackupAndFTP, in its latest version 9, added the option to back up to SkyDrive and Box (in addition to local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you'd like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail.

    Select databases to back up. First connect to your SQL Server or Azure SQL Database, then select the databases you'd like to back up.

    Connect to the SkyDrive or Box cloud. If you have a free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled, as it is available in the Standard version or above. Click "Try now" to get a 30-day trial of all options. On the "SkyDrive Settings" form you'll need to authorize SQLBackupAndFTP to access your SkyDrive. Click "Authorize..." to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account and click "Allow". On the next page you will see a field with an authorization code; copy it to the clipboard. The Box procedure is just the same. After that, return to SQLBackupAndFTP, paste the authorization code and click "OK". Once you are authorized, you can enter the path to a backup folder; SQLBackupAndFTP will create the folder if it does not exist. That's all that has to be done to back up to the SkyDrive or Box cloud. You can now click the "Run Now" button to test the job.

    Conclusion. Whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses:

    - 5-9 licenses: 20% off
    - 10-19 licenses: 35% off
    - more than 20 licenses: 50% off

    Please let me know your favorite options for storing backups.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel, but something went wrong and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The previously unsolved error is in this bug. This is my grub configuration file:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        RUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap" video=uvesafb:mode_option=>>1024x768-24<<,mtrr=3,scroll=ywrap"
        GRUB_CMDLINE_LINUX=" vga=792 splash"
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        GRUB_GFXMODE=1024x768-24
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_LINUX_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    Thank you for answering.
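    The "EOF in backquote substitution" at line 33 points straight at the mangled RUB_CMDLINE_LINUX_DEFAULT line, which has a missing leading G and an unbalanced quote. A sketch of the likely fix, assuming that line is the only damage: replace it in /etc/default/grub with one well-formed line (the video= options below are recovered from the broken line, so verify they are what you want).

    ```bash
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap"
    ```

    Then regenerate grub.cfg and retry the removal, since the kernel's post-removal script failed only because update-grub choked on the file:

    ```bash
    sudo update-grub
    sudo apt-get install -f
    ```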

  • Java Embedded Releases

    - by Tori Wieldt
    Oracle today announced a new product in its Java Platform, Micro Edition (Java ME) product portfolio: Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for resource-constrained, connected, embedded systems. Oracle is also releasing Oracle Java Wireless Client 3.2 and Oracle Java ME Software Development Kit (SDK) 3.2, and announced Oracle Java Embedded Suite 7.0 for larger embedded devices, providing a middleware stack for embedded systems. Small is the new big!

    Introducing Oracle Java ME Embedded 3.2. Oracle Java ME Embedded 3.2 is designed and optimized to meet the unique requirements of small embedded, low-power devices such as micro-controllers and other resource-constrained hardware without screens or user interfaces. These include:

    - On-the-fly application downloads and updates
    - Remote operation, often in challenging environments
    - Ability to add new capabilities without impacting the existing functions
    - Support for hardware with as little as 130 kB RAM and 350 kB ROM

    Oracle Java Wireless Client 3.2. Oracle Java Wireless Client 3.2 is built around an optimized Java ME implementation that delivers a feature-rich application environment for mass-market mobile devices. This new release:

    - Leverages standard JSRs, Oracle optimizations/APIs and a flexible porting layer for device-specific customizations, which are tuned to device/chipset requirements
    - Supports advanced tooling functions, such as memory and network monitoring and on-device tooling
    - Offers new support for dual SIM functionality, which is highly useful for mass-market devices supported by multiple carriers with multiple phone connections

    Oracle Java ME SDK 3.2. Oracle Java ME SDK 3.2 provides a complete development environment for both Oracle Java ME Embedded 3.2 and Oracle Java Wireless Client 3.2. Available for download from OTN, the latest version includes:

    - Small embedded device support
    - In-field and remote administration and debugging
    - Java ME SDK plug-ins for Eclipse and the NetBeans Integrated Development Environment (IDE), enabling more application development environments for Java ME developers
    - A new device skin creator that developers can use to generate custom device skins for testing their applications

    Oracle Java Embedded Suite 7.0. The Oracle Java Embedded Suite is a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to and from other embedded devices. The Oracle Java Embedded Suite is a complete device-to-data-center solution subset for embedded systems.

    See Java ME and Java Embedded in action: Java ME and Java Embedded technologies will be showcased for developers at JavaOne 2012 in over 60 conference sessions and BOFs, as well as in the JavaOne Exhibition Hall. For business decision makers, the new event Java Embedded @ JavaOne will help you learn more about Java Embedded technologies and solutions.

  • Install 32-bit gstreamer plugins on 64-bit

    - by Rua
    I am trying to install the 32-bit gstreamer plugins on my 64-bit system (Ubuntu 12.10 based). I can install the packages gstreamer0.10-plugins-base:i386 and gstreamer0.10-plugins-good:i386. However, gstreamer0.10-plugins-bad:i386, gstreamer0.10-plugins-bad-multiverse:i386 and gstreamer0.10-plugins-ugly:i386 conflict with 64-bit packages already installed on my system:

        $ sudo apt-get install gstreamer0.10-plugins-bad:i386
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         gstreamer0.10-plugins-bad:i386 : Depends: libass4:i386 (>= 0.9.7) but it is not going to be installed
                                          Depends: libdca0:i386 but it is not going to be installed
                                          Depends: libdvdnav4:i386 (>= 4.2.0+20120524) but it is not going to be installed
                                          Depends: libdvdread4:i386 but it is not going to be installed
                                          Depends: libslv2-9:i386 (>= 0.6.4-1~) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

        $ sudo apt-get install gstreamer0.10-plugins-bad-multiverse:i386
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libavcodec53:i386 libavutil51:i386 libfaac0:i386 libfaad2:i386 libgsm1:i386
          libmjpegtools-1.9:i386 libmp3lame0:i386 libquicktime2:i386
          libschroedinger-1.0-0:i386 libswscale2:i386 libva1:i386 libvpx1:i386
          libx264-123:i386 libxvidcore4:i386
        The following packages will be REMOVED:
          gstreamer0.10-plugins-bad-multiverse libfaac0 libmjpegtools-1.9 mint-meta-codecs
        The following NEW packages will be installed:
          gstreamer0.10-plugins-bad-multiverse:i386 libavcodec53:i386 libavutil51:i386
          libfaac0:i386 libfaad2:i386 libgsm1:i386 libmjpegtools-1.9:i386
          libmp3lame0:i386 libquicktime2:i386 libschroedinger-1.0-0:i386
          libswscale2:i386 libva1:i386 libvpx1:i386 libx264-123:i386 libxvidcore4:i386
        0 upgraded, 15 newly installed, 4 to remove and 5 not upgraded.
        Need to get 9,198 kB of archives.
        After this operation, 23.3 MB of additional disk space will be used.
        Do you want to continue [Y/n]?

        $ sudo apt-get install gstreamer0.10-plugins-ugly:i386
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         gstreamer0.10-plugins-ugly:i386 : Depends: libdvdread4:i386 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    This means that I can't play mp3s (among other things) in 32-bit applications that use gstreamer. Is there a way around this?
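    One diagnostic sketch, not a guaranteed fix: ask apt to install the missing i386 dependencies explicitly so it reports the underlying conflict instead of the unhelpful "not going to be installed". The package names below are taken straight from the error output above.

    ```bash
    # Try the blocked dependencies directly; apt will now name the real conflict
    sudo apt-get install libass4:i386 libdca0:i386 libdvdnav4:i386 \
                         libdvdread4:i386 libslv2-9:i386

    # For any package that still refuses, check which versions/architectures
    # the archive actually offers
    apt-cache policy libdvdread4:i386 libdvdread4
    ```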

  • Can't update Nvidia driver: error near the end of the install

    - by user94843
    I just got Ubuntu (first-timer with Ubuntu, so be very descriptive). I think there is a problem with my Nvidia update: it won't let me install it. The name of the update in Update Manager is "NVIDIA binary Xorg driver, kernel module and VDPAU library". When I attempt to install it, it starts out fine, but near the end I get a window titled "package operation failed" with this under the details:

        installArchives() failed: Setting up nvidia-current (295.40-0ubuntu1) ...
        update-initramfs: deferring update (trigger activated)
        INFO:Enable nvidia-current
        DEBUG:Parsing /usr/share/nvidia-common/quirks/put_your_quirks_here
        DEBUG:Parsing /usr/share/nvidia-common/quirks/dell_latitude
        DEBUG:Parsing /usr/share/nvidia-common/quirks/lenovo_thinkpad
        DEBUG:Processing quirk Latitude E6530
        DEBUG:Failure to match Gigabyte Technology Co., Ltd. with Dell Inc.
        DEBUG:Quirk doesn't match
        DEBUG:Processing quirk ThinkPad T420s
        DEBUG:Failure to match Gigabyte Technology Co., Ltd. with LENOVO
        DEBUG:Quirk doesn't match
        Removing old nvidia-current-295.40 DKMS files...
        Loading new nvidia-current-295.40 DKMS files...
        Error! DKMS tree already contains: nvidia-current-295.40
        You cannot add the same module/version combo more than once.
        dpkg: error processing nvidia-current (--configure):
         subprocess installed post-installation script returned error exit status 3
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-31-generic
        Warning: No support for locale: en_US.utf8
        Errors were encountered while processing:
         nvidia-current

    The "Error in function:" section that follows repeats exactly the same output.
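    The key line is "DKMS tree already contains: nvidia-current-295.40": a stale DKMS registration is blocking the package's post-install script. A sketch of the usual cleanup, based directly on that error text:

    ```bash
    # Drop the stale DKMS entry for every installed kernel
    sudo dkms remove nvidia-current/295.40 --all

    # Let apt re-run the failed configuration step
    sudo apt-get install -f
    ```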

  • HDMI sound gone, can't figure out how to turn it back on

    - by Oli
    I have had an Acer Revo box as a media centre for a while. I recently installed Ubuntu Server (10.10) on it, polished it up with nodm (one of the simplest ways to launch an X session) and installed Boxee. It's been working fine for over a month. It's just running ALSA; I've had problems with PulseAudio/Boxee/HDMI before, so I wanted to keep it simple. And that worked: it pushed both PCM and digital (AAC and various Dolby codecs) over HDMI perfectly. But I restarted it the other day, after mucking around with some NFS configuration, and now there isn't any sound.

    The hardware is an ION chipset: Nvidia 9400M graphics with Nvidia MCP79/7A audio.

    One thing I have noticed is that there doesn't appear to be any sign of an IEC958 device. A traditional fix in the past, for fresh installs, has been to load alsamixer, find the IEC device and toggle its mute, but I can't. I'm certain this used to represent the HDMI output; it just doesn't seem to exist any more, unless I run sudo alsa-utils restart while Boxee is running, when I see it in an error message:

        * Shutting down ALSA...                 [ OK ]
        * Setting up ALSA...
        * warning: 'alsactl restore' failed with error message 'alsactl: set_control:1388: Cannot write control '2:0:0:IEC958 Playback Default:0' : Operation not permitted'...
        ...done.

    When nodm (and thus Boxee) isn't running, I don't see this error, but alsamixer still doesn't show the IEC channel. aplay -l gives:

        card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
          Subdevices: 0/1
          Subdevice #0: subdevice #0

    Its section in lshw reads:

        *-multimedia
             description: Audio device
             product: MCP79 High Definition Audio
             vendor: nVidia Corporation
             physical id: 8
             bus info: pci@0000:00:08.0
             version: b1
             width: 32 bits
             clock: 66MHz
             capabilities: pm bus_master cap_list
             configuration: driver=HDA Intel latency=0 maxlatency=5 mingnt=2
             resources: irq:22 memory:fae78000-fae7bfff

    I was running on the stock PAE kernel, but now it's on 2.6.37.1; I upgraded to see if that fixed things, and it didn't. I'm considering a reinstall, but I hate doing that because (a) there's a bit of custom configuration in getting X and Boxee to start on boot, and (b) I don't know what the problem is. If I reinstall this time, I'll end up doing that every time the sound breaks. I love Ubuntu, but I don't want to install it once a month.

    Is there any way to forcibly reset all ALSA settings and start from scratch (without doing a reinstall)? Any other tips? If you need more information, just ask.
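    On the "forcibly reset ALSA" question, a sketch of one full reset, assuming a plain ALSA setup like the one described (no PulseAudio). The paths are standard Ubuntu defaults; consider backing up the state file rather than deleting it.

    ```bash
    # Stop ALSA and discard the saved mixer state that alsactl keeps restoring
    sudo /etc/init.d/alsa-utils stop
    sudo rm /var/lib/alsa/asound.state     # rebuilt automatically on next start

    # Unload and reload every ALSA kernel module, then reset mixer controls
    sudo alsa force-reload
    sudo alsactl init
    ```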

  • determine if udp socket can be accessed via external client

    - by JohnMerlino
    I don't have access to the company firewall server, but supposedly port 1720 is open to my one Ubuntu server. So I want to test it with netcat:

        sudo nc -ul 1720

    The port is listening on the machine ITSELF:

        sudo netstat -tulpn | grep nc
        udp        0      0 0.0.0.0:1720        0.0.0.0:*        29477/nc

    The port is open and in use on the machine ITSELF:

        lsof -i -n -P | grep 1720
        gateway  980  myuser  8u  IPv4 187284576  0t0  UDP *:1720

    I checked the firewall on the current server:

        sudo ufw allow 1720/udp
        Skipping adding existing rule
        Skipping adding existing rule (v6)

        sudo ufw status verbose | grep 1720
        1720/udp                   ALLOW IN    Anywhere
        1720/udp                   ALLOW IN    Anywhere (v6)

    But when I try echoing data to it from another computer (I replaced the x's with the real integers), it doesn't write anything:

        echo "Some data to send" | nc xx.xxx.xx.xxx 1720

    So then I try with telnet from the other computer as well, although I don't think telnet works with UDP sockets:

        telnet xx.xxx.xx.xxx 1720
        Trying xx.xxx.xx.xxx...
        telnet: connect to address xx.xxx.xx.xxx: Operation timed out
        telnet: Unable to connect to remote host

    I ran nmap from another computer within the same local network, and this is what I got:

        sudo nmap -v -A -sU -p 1720 xx.xxx.xx.xx

        Starting Nmap 5.21 ( http://nmap.org ) at 2013-10-31 15:41 EDT
        NSE: Loaded 36 scripts for scanning.
        Initiating Ping Scan at 15:41
        Scanning xx.xxx.xx.xx [4 ports]
        Completed Ping Scan at 15:41, 0.10s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 15:41
        Completed Parallel DNS resolution of 1 host. at 15:41, 0.00s elapsed
        Initiating UDP Scan at 15:41
        Scanning xtremek.com (xx.xxx.xx.xx) [1 port]
        Completed UDP Scan at 15:41, 0.07s elapsed (1 total ports)
        Initiating Service scan at 15:41
        Initiating OS detection (try #1) against xtremek.com (xx.xxx.xx.xx)
        Retrying OS detection (try #2) against xtremek.com (xx.xxx.xx.xx)
        Initiating Traceroute at 15:41
        Completed Traceroute at 15:41, 0.01s elapsed
        NSE: Script scanning xx.xxx.xx.xx.
        NSE: Script Scanning completed.
        Nmap scan report for xtremek.com (xx.xxx.xx.xx)
        Host is up (0.00013s latency).
        PORT     STATE  SERVICE VERSION
        1720/udp closed unknown
        Too many fingerprints match this host to give specific OS details
        Network Distance: 1 hop

        TRACEROUTE (using port 1720/udp)
        HOP RTT     ADDRESS
        1   0.13 ms xtremek.com (xx.xxx.xx.xx)

        Read data files from: /usr/share/nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 2.04 seconds
        Raw packets sent: 27 (2128B) | Rcvd: 24 (2248B)

    The only thing I can think of is a firewall or VPN issue. Is there anything else I can check before requesting that they look at the firewall server again?
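    One detail worth checking before going back to the firewall team: in the echo test above, nc on the client was run without -u, so it attempted a TCP connection against a UDP listener and could never succeed. A sketch of a cleaner end-to-end test (the interface name is an assumption; substitute yours):

    ```bash
    # On the server: listen on UDP 1720, as in the question
    sudo nc -u -l 1720

    # From the external client: note the -u flag, without it nc speaks TCP
    echo "Some data to send" | nc -u xx.xxx.xx.xxx 1720

    # Back on the server: confirm whether the datagram even reaches the box,
    # which separates a firewall drop from a local socket problem
    sudo tcpdump -n -i eth0 udp port 1720
    ```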

  • Why enumerator structs are a really bad idea (redux)

    - by Simon Cooper
    My previous blog post went into some detail as to why calling MoveNext on a BCL generic collection enumerator didn't quite do what you thought it would. This post covers the Reset method.

    To recap, here's the simple wrapper around a linked list enumerator struct from my previous post (minus the readonly on the enumerator variable):

        sealed class EnumeratorWrapper : IEnumerator<int>
        {
            private LinkedList<int>.Enumerator m_Enumerator;

            public EnumeratorWrapper(LinkedList<int> linkedList)
            {
                m_Enumerator = linkedList.GetEnumerator();
            }

            public int Current
            {
                get { return m_Enumerator.Current; }
            }

            object System.Collections.IEnumerator.Current
            {
                get { return Current; }
            }

            public bool MoveNext()
            {
                return m_Enumerator.MoveNext();
            }

            public void Reset()
            {
                ((System.Collections.IEnumerator)m_Enumerator).Reset();
            }

            public void Dispose()
            {
                m_Enumerator.Dispose();
            }
        }

    If you have a look at the Reset method, you'll notice I'm having to cast to IEnumerator to be able to call Reset on m_Enumerator. This is because the implementation of LinkedList<int>.Enumerator.Reset, and indeed of all the other Reset methods on the BCL generic collection enumerators, is an explicit interface implementation. However, IEnumerator is a reference type, and LinkedList<int>.Enumerator is a value type. That means, in order to call the Reset method at all, the enumerator has to be boxed. And the IL confirms this:

        .method public hidebysig newslot virtual final instance void Reset() cil managed
        {
            .maxstack 8
            L_0000: nop
            L_0001: ldarg.0
            L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator<int32> EnumeratorWrapper::m_Enumerator
            L_0007: box [System]System.Collections.Generic.LinkedList`1/Enumerator<int32>
            L_000c: callvirt instance void [mscorlib]System.Collections.IEnumerator::Reset()
            L_0011: nop
            L_0012: ret
        }

    On line 0007 we're doing a box operation, which copies the enumerator to a reference object on the heap, then on line 000c we call Reset on this boxed object. So m_Enumerator in the wrapper class is not modified by the call to Reset, and this is the only way to call the Reset method on this variable (without using reflection). Therefore, the only way the collection enumerator struct can be used safely is to store it as a boxed IEnumerator<T>, and not use it as a value type at all.

  • Handling bugs, quirks, or annoyances in vendor-supplied headers

    - by supercat
    If the header file supplied by a vendor of something with which one's code must interact is deficient in some way, in what cases is it better to:

    1. Work around the header's deficiencies in the main code
    2. Copy the header file to the local project and fix it
    3. Fix the header file in the spot where it's stored as a vendor-supplied tool
    4. Fix the header file in the central spot, but also make a local copy and try to always keep the two matching
    5. Do something else

    As an example, the header file supplied by ST Micro for the STM320LF series contains the lines:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            ....
            __IO uint16_t BSRRL;  /* BSRR register is split to 2 * 16-bit fields BSRRL */
            __IO uint16_t BSRRH;  /* BSRR register is split to 2 * 16-bit fields BSRRH */
            ....
        } GPIO_TypeDef;

    In the hardware, and in the hardware documentation, BSRR is described as a single 32-bit register. About 98% of the time one wants to write to BSRR, one will only be interested in writing the upper half or the lower half; it is thus convenient to be able to use BSRRH and BSRRL as a means of writing half the register. On the other hand, there are occasions when it is necessary that the entire 32-bit register be written as a single atomic operation. The "optimal" way to write it (setting aside white-spacing issues) would be:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            ....
            union  // Allow BSRR access as 32-bit register or two 16-bit registers
            {
                __IO uint32_t BSRR;                       // 32-bit BSRR register as a whole
                struct { __IO uint16_t BSRRL, BSRRH; };   // Two 16-bit parts
            };
            ....
        } GPIO_TypeDef;

    If the struct were defined that way, code could use BSRR when necessary to write all 32 bits, or BSRRH/BSRRL when writing 16 bits. Given that the header isn't that way, would better practice be to use the header as-is, but apply an icky typecast in the main code, writing what would idiomatically be thePort->BSRR = 0x12345678; as *((uint32_t *)&(thePort->BSRRL)) = 0x12345678;, or would it be better to use a patched header file? If the latter, where should the patched file be stored, and how should it be managed?

  • "Hello World" in C++ AMP

    - by Daniel Moth
    Some say that the equivalent of "hello world" code in the data parallel world is matrix multiplication :) Below is the before-C++ AMP and after-C++ AMP code. For more on what it all means, watch the recording of my C++ AMP introduction (the example below is part of the session).

        void MatrixMultiply(vector<float>& vC, const vector<float>& vA,
                            const vector<float>& vB, int M, int N, int W)
        {
            for (int y = 0; y < M; y++) {
                for (int x = 0; x < N; x++) {
                    float sum = 0;
                    for (int i = 0; i < W; i++) {
                        sum += vA[y * W + i] * vB[i * N + x];
                    }
                    vC[y * N + x] = sum;
                }
            }
        }

    Change the function to use C++ AMP and hence offload the computation to the GPU, and now the calling code (which I am not showing) needs no changes and the overall operation gives you really nice speed-up for large datasets...

        #include <amp.h>
        using namespace concurrency;

        void MatrixMultiply(vector<float>& vC, const vector<float>& vA,
                            const vector<float>& vB, int M, int N, int W)
        {
            array_view<const float,2> a(M, W, vA);
            array_view<const float,2> b(W, N, vB);
            array_view<writeonly<float>,2> c(M, N, vC);
            parallel_for_each(c.grid, [=](index<2> idx) mutable restrict(direct3d)
            {
                float sum = 0;
                for (int i = 0; i < a.x; i++) {
                    sum += a(idx.y, i) * b(i, idx.x);
                }
                c[idx] = sum;
            });
        }

    Again, you can understand the elements above by using my C++ AMP presentation slides and recording... Stay tuned for more... Comments about this post are welcome at the original blog.

  • How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    - by TorakTu
    What did work: I installed the original AMD64 version of the Ubuntu 12.04 ISO image, burned a DVD and installed it, which gave kernel 3.2.0-23 to begin with. I got 5.1 surround sound working, and I got the ATI (now AMD) video drivers for my Radeon HD R6870 video card installed from AMD's website. fglrxinfo came up and reported as normal.

    The problem: kernel 3.2.0.x kept locking up, so I tried higher kernel versions. But the ATI/AMD drivers do not install on any kernel above 3.2.0.x.

    What I have tried: I have gone over this tutorial many times (https://help.ubuntu.com/community/BinaryDriverHowto/ATI) and it doesn't work on ANY kernel except 3.2.0.x. The problem here is that the ATI/AMD drivers did work for 12.04 Precise with kernels 3.2.0-23 and -24, but the computer kept locking up. Although all my games would work, the lock-ups were random and constant. I looked all over the web for three days trying to find an answer, and the standard advice for the lock-up issue was simply to update the kernel. So I did. I have tried many kernels, all of them with no lock-ups, BUT the restricted AMD drivers from the AMD website will not install on them. And none of the open-source AMD drivers have EVER installed, no matter what kernel or version I tried.

    Example output of the 3D-type errors:

        javax.media.opengl.GLException: glXGetConfig failed: error code GLX_NO_EXTENSION
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.glXGetConfig(X11GLDrawableFactory.java:651)
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.xvi2GLCapabilities(X11GLDrawableFactory.java:350)
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.chooseGraphicsConfiguration(X11GLDrawableFactory.java:174)
            at javax.media.opengl.GLCanvas.chooseGraphicsConfiguration(GLCanvas.java:520)
            at javax.media.opengl.GLCanvas.<init>(GLCanvas.java:131)
            at haven.HavenPanel.<init>(HavenPanel.java:68)
            at haven.HavenPanel.<init>(HavenPanel.java:78)
            at haven.MainFrame.<init>(MainFrame.java:182)
            at haven.MainFrame.main2(MainFrame.java:306)
            at haven.MainFrame.access$100(MainFrame.java:34)
            at haven.MainFrame$7.run(MainFrame.java:360)
            at java.lang.Thread.run(Thread.java:722)

    And of course this is what fglrxinfo shows:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  139 (ATIFGLEXTENSION)
          Minor opcode of failed request:  66 ()
          Serial number of failed request:  13
          Current serial number in output stream:  13

    EDIT: I forgot to mention that I DID look at this post over the last few days, and it did not help.
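    One common recovery path, sketched under the assumption that the proprietary installer from AMD's website was used before; the installer filename is an example, not confirmed for this card or kernel.

    ```bash
    # Remove the fglrx build tied to the old kernel, if the uninstaller exists
    sudo sh /usr/share/ati/fglrx-uninstall.sh

    # Install headers matching the kernel you actually booted, so the
    # installer can rebuild its kernel module against it
    sudo apt-get install linux-headers-$(uname -r)

    # Re-run the AMD installer to rebuild fglrx for the new kernel
    sudo sh ./amd-driver-installer-12-6-x86.x86_64.run
    ```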

  • Update Errors in Xubuntu 12.10

    - by wil
    I updated my computer from 12.04 to 12.10, and after I finished updating, when I turned the computer on I was unable to update it any further. I tried installing a new copy of 13.04, but my CPU doesn't support PAE. I have an IBM ThinkPad T42 with a 1.7 GHz CPU. When updating through the terminal, this is the output:

        $ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         linux-image-extra-3.5.0-34-generic : Depends: linux-image-3.5.0-34-generic but it is not installed
         linux-image-generic : Depends: linux-image-3.5.0-34-generic but it is not installed
        E: Unmet dependencies. Try using -f.

        $ sudo apt-get upgrade -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following NEW packages will be installed:
          linux-image-3.5.0-34-generic
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/11.8 MB of archives.
        After this operation, 25.9 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 191530 files and directories currently installed.)
        Unpacking linux-image-3.5.0-34-generic (from .../linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb) ...
        This kernel does not support a non-PAE CPU.
        dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        wil@wil-ThinkPad-T42:~/Desktop$
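    A sketch of one way to unwedge apt here, assuming the goal is to stop it from pulling in the PAE-only kernel. The package names come from the error output above; verify with "dpkg -l | grep linux-image" before removing anything, and make sure a bootable non-PAE kernel remains installed.

    ```bash
    # Remove the two packages that insist on the PAE-only kernel
    sudo dpkg --remove linux-image-extra-3.5.0-34-generic linux-image-generic

    # Let apt finish whatever was left half-configured
    sudo apt-get -f install

    # Keep future upgrades from re-adding the PAE-only meta package
    sudo apt-mark hold linux-image-generic
    ```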

  • Does the method of adjustment matter, or just the final calibration?

    - by Steve
    A company produces software (and hardware) that is used both to perform automatic adjustments on electronic test equipment and to perform calibrations of the same equipment. The results of the calibrations are put onto a certificate of calibration that is sent to the customer along with the equipment. This calibration certificate states various conditions of the calibration, such as what hardware (models/serial numbers) and software (version) was used to perform the calibration, as well as things like environmental conditions, etc.

    Making the assumption that the software used to produce the data listed on the certificate of calibration must have gone through a test/release process and must be considered "released" software: does this also mean that the software used for adjustment must also be released?

    I believe that the method (software/environmental conditions/etc.) used or present during adjustment doesn't matter; all that really matters is the end result of the calibration, the conditions present during the calibration, and whether or not the equipment was within the specifications.

    The real question I'm hoping to get answered: is there a reputable source (e.g. NIST or somewhere similar) that addresses this question? (I have searched...)

    The thinking is that during high-volume production runs, the "unreleased" system can be used to perform adjustments, as long as a released system is used to perform the calibrations, since the time required to perform the adjustments is much longer than the calibration. This unreleased system will eventually become released for use, but currently is not.

    Also, please note that there is a distinction between "adjustment" and "calibration". The definition of calibration from the BIPM International Vocabulary of Metrology, 2.39:

        Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.

    Followed by NOTE 2 (emphasis in original text): "Calibration should not be confused with adjustment of a measuring system, often mistakenly called 'self-calibration', nor with verification of calibration."

    As a side note, I'm not sure why this got down-voted. It's regarding software and its use before and after release for use. I believe there is a best practice that can be applied, and this is (hopefully) not primarily opinion-based.

  • C# 5: At last, async without the pain

    - by Alex.Davies
    For me, the best feature in Visual Studio 11 is the async and await keywords that come with C# 5. I am a big fan of asynchronous programming: it frees up resources, in particular the thread that a piece of code needs to run in. That lets the thread run something else while waiting for your long-running operation to complete. That's really important if that thread is the UI thread, or if it's holding a lock because it accesses some data structure.

    Before C# 5, I think I was about the only person in the world who really cared about asynchronous programming. The trouble was that you had to go to extreme lengths to make code asynchronous. I would forever be writing methods that, instead of returning a value, accepted an extra argument that is a "continuation". Then, when calling the method, I'd have to pass a lambda in to it, which contained all the stuff that needed to happen after the method finished. Here is a real snippet of code that is in .NET Demon:

        m_BuildControl.FilterEnabledForBuilding(
            projects,
            enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
                enabledProjects,
                newDirtyProjects =>
                {
                    // Mark any currently broken projects as dirty
                    newDirtyProjects.UnionWith(m_BrokenProjects);

                    // Copy what we found into the set of dirty things
                    m_DirtyProjects = newDirtyProjects;

                    RunSomeBuilds();
                }));

    It's just obtuse. Who puts a lambda inside a lambda like that? Well, me, obviously. But surely enabledProjects should just be the return value of FilterEnabledForBuilding? And newDirtyProjects should just be the return value of FilterNeedsBuilding?

    C# 5 async/await lets you write asynchronous code without it looking so stupid. Here's what I plan to change that code to, once we upgrade to VS 11:

        var enabledProjects = await m_BuildControl.FilterEnabledForBuilding(projects);
        var newDirtyProjects = await m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects);

        // Mark any currently broken projects as dirty
        newDirtyProjects.UnionWith(m_BrokenProjects);

        // Copy what we found into the set of dirty things
        m_DirtyProjects = newDirtyProjects;

        RunSomeBuilds();

    Much easier to read! But how is this the same code? If we were on the UI thread, doesn't the UI thread have to block while FilterEnabledForBuilding runs? No, it doesn't, and that's the magic of the await keyword! It cuts your method up into its constituent pieces, much like I did manually with lambdas before. When you run it, only the piece up to the first await actually runs. The rest is passed to FilterEnabledForBuilding as a continuation, which will get called back whenever that method is finished. In the meantime, our thread returns, and can go back to making the UI responsive, or whatever else threads do in their spare time.

    This is actually a massive simplification, and if you're interested in all the gory details, and the speed hacks that the await keyword actually does for you, I recommend Jon Skeet's blog posts about it.

  • External hard drive issue

    - by Blind Fish
    I am running Ubuntu 12.04, and I have a 500GB external hard drive, formatted as ext4, which I have used for about a year to store batches of data that I am sifting through. About once a week I move a chunk of data from the external drive to my internal drive for processing. As I do that, I delete the data from the external drive and empty the trash, thereby clearing up space on the external and also preventing myself from grabbing stuff that I've already sorted through. The vast majority of this is text files, but there are a few .jpegs and .mp4s thrown in, if that matters. Anyway, this has been working without a hitch for nearly a year.

    So today I plug in the drive and I have an odd issue. I was able to move folders from the external over to my internal drive with no problem, but when I tried to delete those files I was unable to do so. I kept getting the "unable to send file to trash, do you wish to delete permanently?" message. I clicked yes/delete all, but no files were actually deleted; it just sat there. Even worse, while the system was trying in vain to delete those files, I was unable to move more files over. In short, I was stuck. I cancelled the operation, unmounted and remounted the drive, and then I had an even bigger problem.

    I have a spreadsheet of all the different folders on there (exactly 818), and yet when I opened the drive in Nautilus, it found only 512 folders. So I had 306 missing folders, but my free space was unchanged. Immediately I thought the drive might be corrupted. I went into Disk Utility and ran the check-disk option. It said that the drive was NOT clean, but offered no remedy or option to repair. I went back into the drive in Nautilus and once again attempted to delete the files I had already moved. Same issue. I clicked on Details and it said that it was unable to create a trashing file: I/O error. I've looked, and it's not a permissions issue: the drive and all the folders in it belong to me, and they are all read/write.

    I've started running badblocks on the drive, but it looks like that is going to take several hours to complete. Any ideas?
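    Since Disk Utility reports the filesystem as not clean but offers no repair, a sketch of the usual next step is an offline fsck. The device name below is an assumption; confirm it with lsblk or "sudo fdisk -l" first, and note the filesystem must be unmounted while it runs.

    ```bash
    # Identify the external drive's partition (assumed /dev/sdb1 here)
    lsblk

    # fsck must not run on a mounted filesystem
    sudo umount /dev/sdb1

    # Force a full check of the ext4 filesystem and answer the repair prompts
    sudo fsck.ext4 -f -v /dev/sdb1
    ```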

  • Will Online Learning Save Higher Education (and does it need saving)?

    - by user739873
    A lot (an awful lot) of education-industry press real estate has been devoted to the topics of online learning, MOOCs, Udacity, edX, etc., and to the uninitiated you'd think that the education equivalent of the cure for cancer had been discovered. There are certainly skeptics (whose voice is usually swiftly trampled upon by the masses) who feel we could over-steer and damage or destroy something vital to teaching and learning (i.e. the classroom experience and direct interaction with the human beings known as instructors), but for the most part the prevailing opinion seems to be that online learning will take over the world and that higher education will never be the same.

    Now, since you all know I work for a technology company, I'm sure you think I'm going to come down hard on the side of the online-learning proselytizers. Yes, I do believe that this revolution can and will provide access to massive numbers of individuals who couldn't otherwise afford (from a fiscal or time perspective) a traditional education, and that in some cases the online modality will actually be an improvement over certain traditional forms (such as courses taught by an adjunct or teaching assistant who has no business being a teacher). But I think several things need immediate attention, or we're likely to get so caught up in the delivery that we miss some of the real issues (and opportunities) around online learning.

    First and foremost, we've got to give some thought to how traditional information systems are going to accommodate thousands (possibly hundreds of thousands) of individual students, each taking courses from many, many different "deliverers", with an expectation that successful completion of these courses will result in credit at many or most institutions. There's also a huge opportunity to refine the delivery platform (no, an LMS is not a commodity when you are talking about online delivery being your sole mode of operation), as well as the course itself, by mining all kinds of data from the interactions that students have with the material each time they take it. Social data analytics tools will be key in achieving this goal. What about accreditation (badging or competencies vs. traditional degrees)? And again, will the information systems in place today adapt to changes in this area fast enough? The type of scale that this shift in learning could drive has the potential to abruptly overwhelm just about every system in place in higher education today.

    I would like to (with a not-so-gentle reminder) refer you back to a blog entry I wrote when I first stepped into my current role at Oracle, in which I talked about how higher ed needs an "Oracle" more than at any other time in its evolution (despite the somewhat mercantilist reputation it has in some circles). There just aren't that many organizations that can deliver the kinds of solutions "at scale" that this brave new world of online education will demand. The future may be closer than we think.

    Cole

    Read the article

  • I cannot remove/install software some dependency issue?

    - by Ryuzaki
    I'm having difficulty trying to install/uninstall applications in Ubuntu 12.04 LTS. I have a warning that appears on my desktop in the form of a red icon with a line through it, and it states: "An error occurred, please run Package Manager from the right-click menu or apt-get in a terminal to see what is wrong. The error message was: 'Error: BrokenCount > 0' — This usually means that your installed packages have unmet dependencies." I tried to repair automatically through Ubuntu Software Center, but I kept getting errors, or it just didn't seem to work. Afterwards, I opened a terminal and ran sudo apt-get check to see what the problem was, and the results were:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         libjack-jackd2-0 : Breaks: libjack-jackd2-0:i386 (!= 1.9.8~dfsg.1-1ubuntu1) but 1.9.8~dfsg.2-1precise1 is installed
         libjack-jackd2-0:i386 : Breaks: libjack-jackd2-0 (!= 1.9.8~dfsg.2-1precise1) but 1.9.8~dfsg.1-1ubuntu1 is installed
        E: Unmet dependencies. Try using -f.

    After I ran sudo apt-get -f install (in an attempt to fix the issue at hand), I encountered the following errors:

        okudaira@haru-kano:~$ sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libjack-jackd2-0
        Suggested packages:
          jackd2
        The following packages will be upgraded:
          libjack-jackd2-0
        1 upgraded, 0 newly installed, 0 to remove and 24 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/197 kB of archives.
        After this operation, 3,072 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 181702 files and directories currently installed.)
        Preparing to replace libjack-jackd2-0 1.9.8~dfsg.1-1ubuntu1 (using .../libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb) ...
        Unpacking replacement libjack-jackd2-0 ...
        dpkg: error processing /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb (--unpack):
         './usr/share/doc/libjack-jackd2-0/buildinfo.gz' is different from the same file on the system
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb
        localepurge: Disk space freed in /usr/share/locale: 0 KiB
        localepurge: Disk space freed in /usr/share/man: 0 KiB
        localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB
        localepurge: Disk space freed in /usr/share/omf: 0 KiB
        Total disk space freed by localepurge: 0 KiB
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The real problem I'm having is interpreting what is being stated. I'm fairly new to Ubuntu, so I'm not well versed in the terminology or the ins and outs. Can someone tell me what's wrong with my system? I can no longer install or remove any programs at all.
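    The key line is the dpkg error: './usr/share/doc/libjack-jackd2-0/buildinfo.gz' is different from the same file on the system. This class of error commonly occurs when a tool such as localepurge (which is clearly active here) has altered documentation files that the replacement package also ships. A hedged sketch of the usual workaround — not a guaranteed repair — is to let dpkg overwrite the conflicting file and then finish the dependency repair; the .deb path below is the one quoted in the error output:

        # Retry the unpack, allowing dpkg to overwrite the conflicting doc file
        sudo dpkg -i --force-overwrite /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb

        # Let apt finish straightening out the dependency state
        sudo apt-get -f install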

    Read the article

  • How to customize web app template admintasia? [closed]

    - by Balaji
    Below is my requirement:

    Login page
    - A login page for all users.
    - Check whether a user ID and password are valid.
    - Identify which category the user belongs to: Admin, Manager, or User.
    - Redirect each user to the appropriate page for their privileges (Admin / Manager / User).
    - Forgot password: get the username from the email address and send the credentials after validating the user's input.

    Login & Operations

    Admin
    - Display all forms submitted by admins, managers, and users.
    - Tabs: Submissions, Manage forms, and Manage users.
    - Submissions:
      - There must be a 'sort' listing for admin, manager, and user.
      - The username column should link to editing that user's account.
      - There must be an 'ALL' sort option; when it is chosen, the admin must be able to view everything, as below.
      - If the admin selects 'admin' from the dropdown, all forms submitted by admin accounts should display; the same behavior applies for the manager option.
    - Manage forms:
      - Fields: Agent Name, Credit Site, Lenders, Type of Loan, No Scores, and Type of Card.
      - Add, edit, update, and delete.
      - Active and inactive states; always display active records.
    - Manage users:
      - Create a user, with a user account type (Admin, Manager, or User) and a user email option.
      - Edit and update a user: change username (prompt: are you sure?) and change password (prompt: are you sure?).
      - Delete a user (prompt: are you sure you want to delete?).
      - Edited values should be updated in the common form used by all types of users.

    Manager
    - Display all forms submitted by managers and users.
    - Tabs: Submissions and Manage users.

    Operations
    - Admin: create, edit, and delete filter values for admin, manager, and user; view.
    - Manager: create a user only; view forms.
    - User: view and/or submit forms only.

    Logout
    - Log out for all users (prompt: Are you sure you want to log out?).

    I have downloaded admintasia. How do I customize this template for the above requirements? The installation is the same as what you can see at this URL: http://www.admintasia.com/demo/

    Read the article

  • Profiling Startup Of VS2012 – Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deeply I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 at 2.4 GHz with 3 GB RAM and 32-bit Windows 7. ANTS Profiler is really easy to use, so let's try it out.

    ANTS Profiler wants to start the profiled application on its own, which seems to be rather common. I chose method-level timing of all managed methods, and in the configuration menu I asked for all call stacks to get full details. Once this is configured you are ready to go.

    After that you can select the Method Grid to view wall clock time in ms. I hate percentages, which are on by default, because I want to see where absolute time is spent, not something relative. From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time goes. This does look really nice.

    But did you see the size of the scroll bar in the Method Grid? Although I asked for all call stacks, I get only about four pages of methods to drill down into. From the scroll bar I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense: I will never find a bottleneck in VS when I am presented with only a fraction of the methods that were actually executed. I also tried the configuration option to profile even extremely trivial functions, but there was no noticeable difference. It seems that ANTS Profiler filters away far too many details to be useful for bigger systems.

    If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI; this makes it hard to profile processes that are started by some other process.

    Next: JetBrains dotTrace

    Read the article

  • How to perform Cross Join with Linq

    - by berthin
    A cross join performs a Cartesian product of two sets or sequences. The following example shows a simple Cartesian product of the sets A and B:

        A (a1, a2)
        B (b1, b2)
        => C (a1 b1,
              a1 b2,
              a2 b1,
              a2 b2) is the Cartesian product's result.

    LINQ to SQL supports cross join operations. A cross join is not an equijoin, meaning there is no equality predicate in a Join clause of the query. To define a cross join query, you use multiple from clauses; note that there is no explicit operator for the cross join. In the following example, the query joins a sequence of Product with a sequence of PricingRule:

        // Fill the data source
        var products = new List<Product>
        {
            new Product { ProductID = "P01", ProductName = "Amaryl" },
            new Product { ProductID = "P02", ProductName = "acetaminophen" }
        };

        var pricingRules = new List<PricingRule>
        {
            new PricingRule { RuleID = "R_1", RuleType = "Free goods" },
            new PricingRule { RuleID = "R_2", RuleType = "Discount" },
            new PricingRule { RuleID = "R_3", RuleType = "Discount" }
        };

        // Cross join query: one from clause per sequence, no join predicate
        var crossJoin = from p in products
                        from r in pricingRules
                        select new { ProductID = p.ProductID, RuleID = r.RuleID };

    Below are the definitions of the two entities used in the example:

        public class Product
        {
            public string ProductID { get; set; }
            public string ProductName { get; set; }
        }

        public class PricingRule
        {
            public string RuleID { get; set; }
            public string RuleType { get; set; }
        }

    Doing this:

        foreach (var result in crossJoin)
        {
            Console.WriteLine("({0} , {1})", result.ProductID, result.RuleID);
        }

    the output should be similar to this:

        (P01 , R_1)
        (P01 , R_2)
        (P01 , R_3)
        (P02 , R_1)
        (P02 , R_2)
        (P02 , R_3)

    Conclusion
    A cross join operation is useful when performing a Cartesian product of two sequences. However, it can produce very large result sets that may cause performance problems, so use it with caution :)

    Read the article
