Search Results

Search found 17976 results on 720 pages for 'old versions'.

Page 519/720 | < Previous Page | 515 516 517 518 519 520 521 522 523 524 525 526  | Next Page >

  • ZFS pool broken after upgrading to 14.04 LTS

    - by cruiserparts
    Well, I have been putting off upgrading to 14.04 for fear that I would break something. Actually, for fear that it would break ZFS (or that I would break it). I am basically slightly better than a novice at Linux. I've spent the last couple of hours trying to get the pool back. Now I am at the stage where I don't think I have a complete failure, but I am worried that I may break it. So if you could help me not break it, and recover it, I would be thankful.

    My ZFS is file storage, not boot. It was working fine for a year and was working perfectly before the upgrade (scrub and everything was fine). I was confident that the upgrade would work (or at least that I could fix it) because I had upgraded once in the past, the pool went missing, but I was able to get it back. I have reinstalled ZFS, the ZFS utilities, and some dependencies (after searching this forum). I think what happened is that 14.04 deleted some config file, or specifies disk names differently, but I could be wrong. When I set the pool up originally, I used specific device IDs as I recall (because I did not want to break things if they got reassigned at boot). So see if this helps: I can confirm that the old mountpoint folders are there but empty.

    ```
    no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory
      pool: naspool1
     state: UNAVAIL
    status: One or more devices could not be used because the label is missing or invalid.
            There are insufficient replicas for the pool to continue functioning.
    action: Destroy and re-create the pool from a backup source.
       see: http://zfsonlinux.org/msg/ZFS-8000-5E
      scan: none requested
    config:

        NAME                                           STATE     READ WRITE CKSUM
        naspool1                                       UNAVAIL      0     0     0  insufficient replicas
          raidz1-0                                     UNAVAIL      0     0     0  insufficient replicas
            scsi-SATA_WDC_WD1001FALS-_WD-WMATV0990825  UNAVAIL      0     0     0
            scsi-SATA_WDC_WD1001FALS-_WD-WMATV2995365  UNAVAIL      0     0     0
            scsi-SATA_WDC_WD10EARS-00_WD-WMAV51894349  UNAVAIL      0     0     0

    ___@ourserver:~$ sudo zpool import naspool1
    cannot import 'naspool1': a pool with that name is already created/imported,
    and no additional pools with that name were found
    ___@ourserver:~$ sudo zfs list
    no datasets available
    ```

    What other output can I post to help? I'm thinking the update deleted some ZFS config files. It seems like the pool exists, and certainly three perfectly working disks did not fail at once. I am worried that I may break something without a little bit of guidance. Thanks.
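    Since the status above shows the pool cached but its member devices unresolvable, a common first step is to drop the stale import and re-import by device ID. This is a minimal sketch, assuming the on-disk labels are intact and the drives appear under /dev/disk/by-id; it only reads the pool rather than rewriting it:

    ```
    # Hedged sketch: re-import the pool using stable by-id device names.
    sudo zpool export naspool1                       # clear the stale cached import
    sudo zpool import -d /dev/disk/by-id             # scan by-id and list importable pools
    sudo zpool import -d /dev/disk/by-id naspool1    # import using by-id names if it shows up
    ```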

    Read the article

  • What is the correct way to use g_signal_connect() in C++ for dynamic unity quicklists?

    - by hakermania
    I want to make my application use dynamic Unity quicklists. For building my application I am using C++ and the QtCreator IDE. When a menu action is triggered I want to be able to reach a non-static function of my MainWindow class, so that I can update the GUI the same way 'normal' MainWindow functions can. So, I am building up my quicklist like this (mainwindow.cpp):

    ```cpp
    void MainWindow::enable_unity_quicklist(){
        Unity_Menu = dbusmenu_menuitem_new();
        dbusmenu_menuitem_property_set_bool(Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, FALSE);
        Unity_Stop = dbusmenu_menuitem_new();
        dbusmenu_menuitem_property_set(Unity_Stop, DBUSMENU_MENUITEM_PROP_LABEL, "Stop");
        dbusmenu_menuitem_child_append(Unity_Menu, Unity_Stop);
        g_signal_connect(Unity_Stop, DBUSMENU_MENUITEM_SIGNAL_ITEM_ACTIVATED,
                         G_CALLBACK(&fake_callback), (gpointer)this);
        if(!unity_entry)
            unity_entry = unity_launcher_entry_get_for_desktop_id("myapp.desktop");
        unity_launcher_entry_set_quicklist(unity_entry, Unity_Menu);
        dbusmenu_menuitem_property_set_bool(Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, true);
        dbusmenu_menuitem_property_set_bool(Unity_Stop, DBUSMENU_MENUITEM_PROP_VISIBLE, true);
    }

    void MainWindow::fake_callback(gpointer data){
        MainWindow* m = (MainWindow*)data;
        m->on_stopButton_clicked();
    }

    void MainWindow::on_stopButton_clicked(){
        //stopping the process...
    }
    ```

    mainwindow.h:

    ```cpp
    private slots:
        void enable_unity_quicklist();
        void on_stopButton_clicked();
    public slots:
        static void fake_callback(gpointer data);
    ```

    This suggestion was taken from http://old.nabble.com/Using-g_signal_connect-in-class-td18461823.html. The program crashes immediately after I choose the 'Stop' action from the Unity quicklist. Debugging the program shows that I am not able to access anything MainWindow-related inside on_stopButton_clicked() without crashing. For example, it crashes on this check (the first two lines of code inside that function):

    ```cpp
    if (!ui->stopButton->isEnabled())
        return;
    ```

    I have also tested lots of other things that I found on the internet, but none of them worked. One interesting solution would be to use gtkmm (http://developer.gnome.org/gtkmm-tutorial/stable/sec-connecting-signal-handlers.html.en) but I am not used to working on GTK applications at all (I work solely in Qt) and I don't know if this even suits my case. A compilable example indicating what the problem is can be found at: http://ubuntuone.com/7iKA3wnPmWVp8YNNDLlVQI (3.2Kb). If you are not familiar with the QtCreator IDE, you can compile it with the following commands, as long as you have all the needed libraries:

    ```
    cd dynamic_unity_quicklists_test; qmake -project; qmake; make
    ```
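    One thing worth checking (an assumption based on libdbusmenu's documented signal signature, not verified against this project): the item-activated signal delivers the menu item and a timestamp before the user-data pointer, so a callback declared with a single gpointer parameter will receive the DbusmenuMenuitem* where it expects the MainWindow*, and the cast then crashes. A sketch of a callback matching the full signature:

    ```cpp
    // Hedged sketch: item-activated passes (item, timestamp, user_data),
    // so accept all three; only then does 'data' hold the MainWindow*.
    static void stop_activated_cb(DbusmenuMenuitem* item, guint timestamp, gpointer data)
    {
        (void)item; (void)timestamp;
        MainWindow* m = static_cast<MainWindow*>(data);
        m->on_stopButton_clicked();
    }

    // connected as before:
    // g_signal_connect(Unity_Stop, DBUSMENU_MENUITEM_SIGNAL_ITEM_ACTIVATED,
    //                  G_CALLBACK(stop_activated_cb), (gpointer)this);
    ```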

    Read the article

  • Geek Bike Ride JavaOne 2012

    - by Tori Wieldt
    "Geek Bike Ride?" the clerk at the bike rental shop asked. "Are you guys all from the same company?" "We aren't even from the same country!" we answered. "I'm from Russia." "We're from Germany."  "I'm from Belgium." "I'm from Palo Alto." "I'm from Japan."  "We're from Brazil." "We're from Brazil." "I'm from Sweden." "Coooool" was all she could say. She was right. The Geek Bike Ride was cooool. We had 39 bike riders and one skater show up Saturday for a great route from San Francisco's Fisherman's Wharf, across the Golden Gate bridge, to Saulsalito, and back to the city by ferry. Duke Bike jerseys, sponsored by OTN, were given out. To make sure Java developers got them, each person had to answer a Java question to get a jersey. The questions were really hard, like "Who is the Father of Java?" "What's the biggest Java conference in San Francisco?" The best was when the question was "Name one of Duke's Choice Award winner from this year," and Régina ten Bruggencate answered answered "Me!"  It was foggy throughout the day, with the sun poking out occasionally. The fog was thickest on the bridge, more that one rider commented that we were "in the cloud." It was a great day to meet new friends, and have a chat with old friends. We all had fun, though some of us may more a little more slowly during JavaOne. Ride on!  Photos by permission by Arun Gupta and Yoshio Terada. Thanks, guys!

    Read the article

  • Managing Joomla via Android

    Surprisingly, it was only today that I actually looked for possible solutions to write more content for my blog. For quite some time I've been using my Samsung Galaxy Tab 10.1 for all kinds of social media activities like Google+, FB, etc., but also for my casual mail during the evening hours. And yes, I feel a little bit guilty about missing the chance to use my tablet to write some content here... OK, only a little bit. ;-)

    These are not the droids you are looking for

    But those lazy times are over! While searching the Play Store for 'joomla' I got three interesting hits:

    - Joomla Admin Mobile!
    - Joooid
    - Joomla! Security Checklist

    After reading the reviews I installed the two latter apps.

    Joomla! Security Checklist

    The author clearly states that the app is primarily for his personal purpose of having a safety checklist at hand at any time. I guess that any reader of this article has an Android-based smartphone or tablet, so that simple app should be part of your toolbox when using Joomla! for your websites.

    Joooid plugin & app

    Although I was looking for an app that could work with the default XML-RPC interface of Joomla, I have to admit that this combination of an enhanced web service suits me better, mainly for performance reasons. The official website has not only the downloads for Joomla versions 1.5 - 2.5 but also very good and easy-to-follow step-by-step instructions to prepare your server for the Android app. It will take you less than 5 minutes to get it up and running. For safety reasons, I recommend that you configure your web server to put an additional authentication layer on the plugins folder; the smartphone app is able to work against HTTP authentication. Personally, I like the look and feel of the app. It is a little bit different compared to the web UI but still easy to use. In fact, this article is the first one written in the Joooid app. At the moment, I only miss the ability to have list tags.

    Quick and easy

    Writing full-fledged articles with images, a couple of hyperlinks and some styling here and there should be left to the desktop, at least for the moment. Let's see whether I'm going to change my mind on this during the upcoming months... I'll give it a try, and hope to publish at least once per month using Joooid. Actually, it would be great to have some feedback about other Joomla! clients in the wild.
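    For the additional authentication layer mentioned above, a minimal sketch for Apache (the directory path and password file are illustrative assumptions; the same can be done via .htaccess or in nginx):

    ```
    # Hedged sketch: HTTP Basic auth in front of the Joomla plugins folder.
    <Directory "/var/www/joomla/plugins">
        AuthType Basic
        AuthName "Joomla remote publishing"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </Directory>
    ```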

    Read the article

  • Installed LibreOffice 4 with ppa, how do I remove it and go back to LibreOffice 3?

    - by MMA
    EDIT: This question is not at all a duplicate of "How to downgrade from LibreOffice 4.0 to 3.6?" That question talks about downgrading from a specific version of LibreOffice, namely from 4.0 to 3.6. The solutions mentioned there would work, but I wanted a general solution for going from a higher version to a lower one without adding a PPA or downloading .deb files. The suggestions there are to either download .deb files for LibreOffice 3.6 or add a repository for it. Furthermore, some of the answers put out-of-proportion (though applicable) stress on the use of Synaptic rather than a general command-line solution. That made me wonder: if I took a fresh computer today and installed Ubuntu 12.04, LibreOffice would install without a hitch. Then why can I not install LibreOffice on my 12.04 machine today from the simple command line? This answer to my question clarified everything: I need to use ppa-purge, which resets all packages from a PPA to the standard versions released for my distribution. Basically it is a way to restore my system back to the way it was before I installed packages from that PPA. This article further elaborates on the idea. That answer worked perfectly for me, and it was an education, since it taught me how to downgrade a package that was added via PPA.

    The original question: I had upgraded from LibreOffice 3 to LibreOffice 4 using the PPA. Since I found that LibreOffice 4 has some issues, including handling my native language, I wanted to move back to LibreOffice 3. To accomplish this, I removed the LibreOffice config directory from my home directory and then purged LibreOffice from my machine:

    ```
    sudo apt-get purge libreoffice-*
    ```

    Then I removed the relevant PPAs using the sudo apt-add-repository --remove command, and ran sudo apt-get update. Now, when I try to install LibreOffice using the command sudo apt-get install libreoffice, I get an avalanche of output about unmet dependencies, something like:

    ```
    The following packages have unmet dependencies:
     libreoffice : Depends: libreoffice-core (= 1:3.5.7-0ubuntu4) but it is not going to be installed
    (snipped)
    ```

    If I dig into the issue further, using the command sudo apt-get install libreoffice-core, I get:

    ```
    The following packages have unmet dependencies:
     libreoffice-core : Depends: libreoffice-common (> 1:3.5.7) but it is not going to be installed
                        Depends: libexttextcat0 (>= 2.2-8) but it is not going to be installed
                        Depends: ure (>= 3.5.7~) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    ```

    Could you please tell me how to install LibreOffice 3 on my machine? I am using Ubuntu 12.04 LTS.
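    For reference, the ppa-purge route the EDIT describes looks roughly like this (a sketch; the exact PPA name is an assumption and should match whatever apt-add-repository originally added):

    ```
    # Hedged sketch: let ppa-purge downgrade every package the PPA upgraded,
    # instead of removing the repository by hand.
    sudo apt-get install ppa-purge
    sudo ppa-purge ppa:libreoffice/ppa    # PPA name is an assumption
    sudo apt-get update
    sudo apt-get install libreoffice
    ```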

    Read the article

  • Interconnect nodes in a Java distributed infrastructure for tweet processing

    - by David Moreno García
    I'm working on a new version of an old project that I used to download and process user statuses from Twitter. The main problem with that project was its infrastructure. I used multiple instances of a Java application (trackers) to download from Twitter given a specific task (basically terms to search for), connected to a central node (a web application) that had to process all tweets once per day and generate a new task for each tracker every 15 minutes. The central node also had to monitor all trackers and enable/disable them at the user's request. This, as I said, was too slow because I had multiple bottlenecks, so in this new version I want to improve the infrastructure and isolate each piece of functionality in a specific node. I also need a good notification system to receive notifications from any node.

    So, in the next diagram I show the components that I'll need in this new version. As you can see, there are more nodes. Here are some notes about them:

    - Dashboard: controls tracker statuses and sends a single task to each of them (on user request). The trackers will use this task until it is replaced with a new one (when done, not every 15 minutes like before).
    - Search engine: I need to store all the tweets. They are first stored in a local database for each tracker, but after that I'm thinking of using something like Elasticsearch to be able to do fast searches.
    - Tweet processor: just an isolated component with its own database (maybe something like the search engine, to have fast access to info generated by the module). In the future more could be added.
    - Application UI: a web application with a database shared with the Dashboard (mainly to store user information and preferences). Indeed, both could be merged into a single web app. The main difference from the previous version of the project is that now these will be isolated and will only show information and send requests; I will not do any heavy tasks in them (like processing tweets as I did before).

    Having these components, my main headache is how to structure everything so that I don't have to rewrite a lot of code every time I need access to any new data. Another headache is how to interconnect the nodes. I could use sockets, but that is a pain in the ass. Maybe a REST layer? And finally, if all the nodes are isolated, how could I generate notifications for each user, whose info lives only in the database used by the Application UI?

    I'm programming this using Java and Spring (at least I used them in the last version) but I have no problem with changing the language if I can take advantage of a tool/library/engine to make my life easier and have a better platform. Any comment will be appreciated.
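    On the interconnection question, a REST layer over HTTP is a common fit for this shape of system. A minimal sketch with Spring MVC (endpoint and type names are illustrative assumptions; for the notification side, a message broker such as RabbitMQ pushed to by each node tends to work better than polling):

    ```java
    import java.util.List;
    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.*;

    // Hypothetical DTO carrying the search terms for a tracker.
    class TrackerTask {
        public List<String> terms;
    }

    // Hedged sketch: the Dashboard pushes a task to a tracker over HTTP
    // instead of raw sockets.
    @Controller
    public class TrackerEndpoint {

        @RequestMapping(value = "/trackers/{id}/task", method = RequestMethod.PUT)
        @ResponseBody
        public String assignTask(@PathVariable("id") String trackerId,
                                 @RequestBody TrackerTask task) {
            // hand the task to the tracker identified by trackerId...
            return "accepted";
        }
    }
    ```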

    Read the article

  • Booting Ubuntu on HP Pavilion g7 - 13.04 [duplicate]

    - by death2040
    This question already has an answer here: "My computer boots to a black screen, what options do I have to fix it?" (24 answers)

    I have an HP Pavilion g7 with an AMD A4 processor and Radeon graphics. I want to install Ubuntu on my laptop, but whenever I put the Ubuntu live CD in and boot to it, the screen shows the Ubuntu logo and the four little dots, then after about a minute or two the screen goes black. I can tell the screen is still on, but it doesn't have anything on it. I'm beginning to wonder if it's a driver problem, but I can't really install the drivers when I can't even get Ubuntu to show anything except a loading screen. I've already tried 12.04 and 12.10 and all the others down to Ubuntu 10; none of them worked. The other versions don't even show the Ubuntu logo. I'd prefer to have Ubuntu 13.04 on it if possible, but I haven't had any luck finding a solution. I've also tried using the WUBI installer in Windows 7, but all that did was make my computer slower in Windows, and it does the same thing with the screen when I boot into Ubuntu. I'm trying to use Ubuntu alongside Windows 7. I can't find any solution on Google. It won't load anything. I know there is a program called GRUB on Ubuntu that I used on my desktop computer when it had graphics trouble, but the trouble with my desktop was minor: the screen would flash and then show weird patterns. I can't find anything on what to do with the HP laptop. Please help. I use this laptop a lot for games on Windows 7 and I just want to use Ubuntu for when I take my laptop to school and for school work.

    Edit: I just tried booting with nomodeset and some other options, and it still didn't work. It did boot up, but now when it goes to install alongside Windows it crashes and says Ubuntu is forcing a reboot, or something like that. Also, this question is different from the black-screen-at-boot issue because when I do use nomodeset and select "Install Ubuntu", it goes as far as the screen where you can choose to replace Windows or run alongside Windows. Then after I click continue it ejects the live CD and turns off my computer without installing anything. The error message it shows when it ejects the disk is:

    ```
    signal 15, shutting down
    modem manager [1675]: <info> Caught nm-dispatcher.action: Caught signal 15, shutting down...
     * Deconfiguring network interfaces...
    Please remove installation media and close the tray (if any) then press ENTER
     * Deactivating swap...
     * Stopping remaining crypto disks...
     * Stopping early crypto disks...
    umount: /run/lock: not mounted
    umount: /run/shm: not mounted
    ```
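    If a nomodeset boot ever does get through the installer, the usual follow-up is to make the option permanent so the installed system boots the same way. A sketch of the standard approach (whether nomodeset is the right workaround for this particular Radeon chip is an assumption):

    ```
    # Hedged sketch: persist nomodeset after a successful install.
    sudo nano /etc/default/grub
    #   change: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    #   to:     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
    sudo update-grub
    ```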

    Read the article

  • Restoring MSDB

    - by David-Betteridge
    We recently performed a disaster recovery exercise which included the restoration of the MSDB database onto our DR server. I did a quick google to see if there were any special considerations and found the following MS article: Considerations for Restoring the model and msdb Databases (http://msdn.microsoft.com/en-us/library/ms190749(v=sql.105).aspx). It said both the original and replacement servers must be on the same version. I double-checked, and in my case they are both SQL Server 2008 R2 SP1 (10.50.2500). So I went ahead and stopped SQL Server Agent, restored the database, and restarted the agent. I checked the jobs and they were all there; everything looked great, and was, until the server was rebooted a few days later.

    Then the syspolicy_purge_history job started failing on the 3rd step with the error message "Unable to start execution of step 3 (reason: The PowerShell subsystem failed to load [see the SQLAGENT.OUT file for details]; The job has been suspended). The step failed."

    A bit more googling pointed me to the msdb.dbo.syssubsystems table:

    ```sql
    SELECT * FROM msdb.dbo.syssubsystems WHERE start_entry_point = 'PowerShellStart';
    ```

    And in particular to the value of subsystem_dll: it still had the path to SQLPOWERSHELLSS.DLL on the old server. The DR instance has a different name to the live instance, so the paths are different. This was quickly fixed with the following SQL:

    ```sql
    USE msdb;
    GO
    sp_configure 'allow updates', 1;
    RECONFIGURE WITH OVERRIDE;
    GO
    UPDATE msdb.dbo.syssubsystems
    SET subsystem_dll = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.DR\MSSQL\binn\SQLPOWERSHELLSS.DLL'
    WHERE start_entry_point = 'PowerShellStart';
    GO
    sp_configure 'allow updates', 0;
    RECONFIGURE WITH OVERRIDE;
    GO
    ```

    I stopped and started SQL Server Agent, and now the job completes. I then wondered if anything else might be broken:

    ```sql
    SELECT subsystem_dll FROM msdb.dbo.syssubsystems;
    ```

    This shows a further 10 wrong paths, fortunately for parts of SQL Server (replication, SSIS etc.) we aren't using!

    Lessons learnt:
    1. DR exercises are a good thing!
    2. Keep the Live and DR environments as similar as possible.
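    Since ten more rows carry the old path, a single set-based fix is possible instead of ten hand-edited UPDATEs. A sketch, assuming every wrong row differs only in the instance directory (verify both path fragments with a SELECT first, and wrap it in the same 'allow updates' steps as above):

    ```sql
    -- Hedged sketch: rewrite the old instance directory in every row at once.
    -- Both path fragments are assumptions; check them before running.
    UPDATE msdb.dbo.syssubsystems
    SET subsystem_dll = REPLACE(subsystem_dll,
            'MSSQL10_50.LIVE\MSSQL\binn',   -- old instance directory (assumption)
            'MSSQL10_50.DR\MSSQL\binn')     -- DR instance directory
    WHERE subsystem_dll LIKE '%MSSQL10_50.LIVE%';
    ```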

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application.

    Building

    Set up the library with automatic versioning and a nuspec:

    - Set the library assembly version to auto-increment build and revision: in AssemblyInfo, [assembly: AssemblyVersion("1.0.*")]. This auto-increments build and revision based on the time of build.
    - Major & minor: major should be changed when you have breaking changes; minor should be changed once you have a solid new release. During development I don't increment these.
    - Create a nuspec, and version it with the code: set the version to <version>$version$</version>. This uses the assembly's version, which is auto-incrementing.

    Make changes to code, then run the automated build (ruby/rake) with "rake nuget". The nuget task builds the NuGet package and copies it to a local NuGet feed. I use an environment variable to point at this so I can change it on a machine level! The nuget command below assumes a nuspec is checked in called Library.nuspec next to the csproj file.

    ```ruby
    $projectSolution = 'src\\Library.sln'
    $nugetFeedPath = ENV["NuGetDevFeed"]

    msbuild :build => [:clean] do |msb|
      msb.properties :configuration => :Release
      msb.targets :Build
      msb.solution = $projectSolution
    end

    task :nuget => [:build] do
      sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
    end
    ```

    Set up the local NuGet feed as a NuGet package source (this is only required once per machine). Then go to the consuming project and update the package with Update-Package Library (or Install-Package Library the first time).

    TLDR: change library code; run "rake nuget"; run "Update-Package Library" in the consuming application; build/test! If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    Once you have a set of changes that you want to release, consider versioning, and possibly increment the minor version if needed. Pick the package out of your local feed and copy it to a public/shared feed! I have a script to do this where I can drop the package onto a batch file. Replace apikey with your NuGet feed's API key, and take out the confirm prompts if you don't want them:

    ```bat
    @ECHO off
    echo Upload %1?
    set /P anykey="Hit enter to continue "
    nuget push %1 apikey
    set /P anykey="Done "
    ```

    Note: it helps to prune all the unnecessary versions from your local feed once you are done testing and ready to publish.

    TLDR: consider the version number; run the command to copy to the public feed.

    Read the article

  • Express your personality and potential @ Oracle

    - by jessica.ebbelaar(at)oracle.com
    Ciao, my name is Michel and I am a 24-year-old guy from Forlì, Italy, working as a Business Intelligence Business Development Consultant in Rome. After I completed my Bachelor's Degree in Business Administration at Bologna University, I took a Multiple Master of Science in International Management organized by three European universities: Bologna University (IT), ICN Business School of Nancy (FR) and Uppsala University (SE). I therefore had the chance to travel a lot and, most importantly, to study with and meet hundreds of people from all over the world. This experience strengthened the passion I foster for international environments, different cultures and countries, not to mention learning foreign languages. Working for a multinational as structured as Oracle totally reflects my desire to be surrounded by a multicultural and international atmosphere, with the opportunity to grow from a personal point of view and to endlessly boost my career path.

    Demand Generation

    My department is responsible for demand generation activities. That implies, for instance, implementing various strategies aimed at feeding the pipeline for Business Intelligence products in the Italian market. Organizing marketing campaigns and events, and providing ideas or contacts to the sales force, are just a few examples of our work. I like to define the role of business development as translating marketing insights into tools to increase sales, accounting for the differences among countries, companies and industries. Furthermore, an important feature of the job is collaborating with the EMEA team to share knowledge and best practices. My initial lack of an IT background has been constantly covered by the managers and my personal mentor. The thing I appreciate most is indeed the fact that I always feel my potential is growing, becoming essential day after day. I am surprised by the trust and confidence people have in me and how they proudly encourage my personal initiative and always spur me on to contribute.

    Career Ambitions

    If your ambition is to work within an international but extremely people-focused environment, to contribute to the growth of one of the most successful companies in the world, to deal with a fast-paced industry and highly competitive market, to have the chance to fully express your personality and potential and to satisfy your career ambitions over the years, then Oracle is right for YOU. Looking forward to having YOU aboard! Do you want to find out more about the open roles within Oracle? Follow us on http://campus.oracle.com.

    Read the article

  • Need help fixing DPKG errors after update from 12.04 to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10, and now I can't get my system to update all of its packages properly, no matter what I do. What is happening here and how do I fix it? If I had thought 12.10 would be this much of a hassle I would never have upgraded.

    Here is a sampling of the output that comes back from "apt-get -f install". It should also be noted that it is just these 6 packages; no other packages have given me this kind of trouble. Well, I should say as of now: it was just 5, but then I got an update for Unity, and now unity-common is added to the troublemakers, which prevents me from upgrading the actual unity package, as unity-common is a dependency of it.

    ```
    Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
    /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
    dpkg: warning: subprocess old pre-removal script returned error exit status 2
    dpkg: trying script from the new package instead ...
    /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
    dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack):
     subprocess new pre-removal script returned error exit status 2
    /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
    dpkg: error while cleaning up:
     subprocess installed post-installation script returned error exit status 2
    Errors were encountered while processing:
     /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
     /var/cache/apt/archives/pcmciautils_018-8_i386.deb
     /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
     /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
     /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
     /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    ```

    I would also like to note that I have cleaned the apt cache both through the terminal and manually, I have tried installing the packages manually through dpkg from both /var/cache/apt/archives/ and from my own manually downloaded .deb files, I have tried dpkg-reconfigure, and I have used BleachBit to clean my system. I have also tested both my HDD and memory and found no significant errors that would explain the input/output errors. Quite frankly I am just out of options, and I have grown tired of trying to google a solution to this mess, but I still do not wish to resort to backing up my settings and reinstalling the system. Any help would be appreciated. I am only interested in answers; please leave your feelings about grammar, punctuation, and bias towards how a "post should look" at the door. If you don't have something to contribute towards solving my problem then you are just contributing to it. Thank you.
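    Given that every failure above points at dpkg-maintscript-helper itself rather than at any one package, one avenue worth ruling out (a sketch; the diagnosis is an assumption) is that the helper binary is unreadable or corrupt:

    ```
    # Hedged sketch: can the helper itself be read without I/O errors?
    ls -l /usr/bin/dpkg-maintscript-helper
    sudo cat /usr/bin/dpkg-maintscript-helper > /dev/null
    # If that errors out, force-reinstall dpkg, which ships the helper.
    sudo apt-get install --reinstall dpkg
    ```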

    Read the article

  • Should External USB hard drive auto mount Ubuntu 12.04.01 LTS

    - by Chris Good
    I want to have external USB hard drives automatically mounted when plugged in. I have 2 drives that are exactly the same except for the volume label; they both have the same UUID. I want to be able to swap them easily, as I'm using them for backups and want to keep one at home for off-site backup. I've set up /etc/fstab so they should mount at different places based on their volume label:

    ```
    LABEL=Passport1 /media/Passport1 ntfs defaults,windows_names,locale=en_US.utf8 0 0
    LABEL=Passport2 /media/Passport2 ntfs defaults,windows_names,locale=en_US.utf8 0 0
    ```

    blkid shows:

    ```
    ...
    /dev/sdc1: LABEL="Passport2" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"
    /dev/sdd1: LABEL="Passport1" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"
    ```

    They both mount automatically during reboot but do not mount when just plugged in to a running system. I've read lots of stuff about this, much of it old, so I'm not sure what still applies. I've read some stuff that says the mounts should happen automatically when plugged in, and lots of other stuff that says you have to install other software to make this happen, although much of that just seems to set up the fstab. What's the real story?

    Here is /var/log/syslog when a drive is plugged in:

    ```
    Dec 14 11:22:58 ausyvutims1 kernel: [66221.300196] usb 1-1: new high-speed USB device number 6 using ehci_hcd
    Dec 14 11:22:58 ausyvutims1 mtp-probe: checking bus 1, device 6: "/sys/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1"
    Dec 14 11:22:58 ausyvutims1 mtp-probe: bus: 1, device: 6 was not an MTP device
    Dec 14 11:22:58 ausyvutims1 kernel: [66221.656020] scsi7 : usb-storage 1-1:1.0
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.661534] scsi 7:0:0:0: Direct-Access WD My Passport 0748 1016 PQ: 0 ANSI: 6
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.666466] scsi 7:0:0:1: Enclosure WD SES Device 1016 PQ: 0 ANSI: 6
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.667739] sd 7:0:0:0: Attached scsi generic sg3 type 0
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.667913] ses 7:0:0:1: Attached Enclosure device
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.668047] ses 7:0:0:1: Attached scsi generic sg4 type 13
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.678473] sd 7:0:0:0: [sdc] 1953458176 512-byte logical blocks: (1.00 TB/931 GiB)
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.687700] sd 7:0:0:0: [sdc] Write Protect is off
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.687705] sd 7:0:0:0: [sdc] Mode Sense: 47 00 10 08
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.701076] sd 7:0:0:0: [sdc] No Caching mode page present
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.701081] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.738062] sd 7:0:0:0: [sdc] No Caching mode page present
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.738068] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.754558] sdc: sdc1
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.792006] sd 7:0:0:0: [sdc] No Caching mode page present
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.792012] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.792016] sd 7:0:0:0: [sdc] Attached SCSI disk
    Dec 14 11:22:59 ausyvutims1 ata_id[16971]: HDIO_GET_IDENTITY failed for '/dev/sdc': Invalid argument
    ```

    Thanks for any help offered
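    One approach that fits this label-based setup is a udev rule that mounts the matching fstab entry when a partition with that label appears. A sketch only: the rules file name is hypothetical, RUN semantics differ between releases, and the fstab entries above must already exist for the bare mountpoint mounts to work:

    ```
    # Hedged sketch: /etc/udev/rules.d/99-passport.rules (hypothetical name)
    ACTION=="add", ENV{ID_FS_LABEL}=="Passport1", RUN+="/bin/mount /media/Passport1"
    ACTION=="add", ENV{ID_FS_LABEL}=="Passport2", RUN+="/bin/mount /media/Passport2"
    ```

    Reload the rules with sudo udevadm control --reload-rules and replug a drive to test.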

    Read the article

  • New Version Demonstration VM BIC2g 2013-04 Partner Edition

    - by Mike.Hallett(at)Oracle-BI&EPM
    This Oracle Business Intelligence Linux VM virtual appliance ("BIC2g") was developed to support Oracle OBI & BI-Apps sales and Oracle partners in product demonstrations, training activities and POC activities. It is available on ftp.oracle.com (see the deployment guide and "BIC2g 2013-04 Partner Edition Readme" PDF from the link below) and is available for OPN member partners.

    This BIC2g image is based on OBIEE v11.1.1.7, with Essbase and Essbase Studio Server started when starting BI. It also contains:

    - An updated BI-Apps Cross Functional Demo (date advanced from 2011 to 2013), including DAC 11.1.1.6.4 and Informatica 9.0.1, configured for a load against EBS R12. Both the 7.9.6.3 and the 7.9.6.4 rpd/catalog versions of BI-Apps are provided.
    - An updated integrated Essbase - BI Apps - EBS demo (date advanced from 2009 to 2013), with the BI Apps data sets re-configured to remove VPD (a simplification) and with greatly improved performance.

    Note that this image is identical to Oracle's internal BI demonstration image, except that Endeca has been removed pending availability of the latest Endeca version on OTN. Once it is available on OTN we will provide a replacement that contains Endeca. Some of the screenshots in the Readme PDF show Endeca, but it is not on this (2013-04) image.

    The FTP access details and password are shown at the bottom of the page @ BI Solutions Engineering Partner Portal.

    Read the article

  • Becoming the well-integrated content company (and combating AIUTLVFS)

    - by Lance Shaw
    Every single day, each of us creates more and more content. Sometimes it is brand new material and many times it is iterations of existing content, but no one would argue that information and content growth is growing at an almost exponential rate.

    With all this content being created and stored, a number of problems naturally arise. One of the most common issues that users run into is "Am I Using The Latest Version of this File" Syndrome, or AIUTLVFS. This insidious syndrome is all too common and results in ineffective, poor or downright wrong business decisions being made. When content or files are unavailable or incorrect within the scope of key business processes, the chance of erroneous and costly business decisions is magnified even further.

    For many companies, the ideal scenario is to be able to connect multiple business systems, both old and new, into one common content repository. Not only does this reduce content duplication, it also helps guarantee that everyone in various departments is working off the proverbial "same page". Sounds simple - but for many organizations, the proliferation of file shares, SharePoint sites, and other storage silos of content keeps the dream of a more efficient business a distant one.

    We've created some online assets to help you in your evaluation and eventual improvement of your current content management and delivery systems. Take a few minutes to check out our Online Assessment Tool. It's quick, easy and just might provide you with insights into how you can improve your current content ecosystem. While you are there, check out our new Infographic that outlines common issues faced by companies today. Feel free to save our informative Infographic PDF and share it with business colleagues and your management to help them understand the business costs and impact of inaction. Together we can stop AIUTLVFS in its tracks and run our businesses more effectively than ever.

    Additionally, we hope you will take a few minutes to visit our new and informative webpages dedicated to the value of a well connected, fully integrated content management system. It's a great place to learn more about how integrating WebCenter Content into your infrastructure can lower your operational costs while boosting process and worker efficiency.

    Read the article

  • What are some reasonable stylistic limits on type inference?

    - by Jon Purdy
    C++0x adds pretty darn comprehensive type inference support. I'm sorely tempted to use it everywhere possible to avoid undue repetition, but I'm wondering if removing explicit type information all over the place is such a good idea. Consider this rather contrived example:

    Foo.h:

    ```cpp
    #include <set>

    class Foo {
    private:
        static std::set<Foo*> instances;
    public:
        Foo();
        ~Foo();

        // What does it return? Who cares! Just forward it!
        static decltype(instances.begin()) begin() { return instances.begin(); }
        static decltype(instances.end()) end() { return instances.end(); }
    };
    ```

    Foo.cpp:

    ```cpp
    #include <Foo.h>
    #include <Bar.h>

    // The type need only be specified in one location!
    // But I do have to open the header to find out what it actually is.
    decltype(Foo::instances) Foo::instances;

    Foo::Foo() {
        // What is the type of x?
        auto x = Bar::get_something();
        // What does do_something() return?
        auto y = x.do_something(*this);
        // Well, it's convertible to bool somehow...
        if (!y) throw "a constant, old school";
        instances.insert(this);
    }

    Foo::~Foo() {
        instances.erase(this);
    }
    ```

    Would you say this is reasonable, or is it completely ridiculous? After all, especially if you're used to developing in a dynamic language, you don't really need to care all that much about the types of things, and you can trust that the compiler will catch any egregious abuses of the type system. But for those of you that rely on editor support for method signatures, you're out of luck, so using this style in a library interface is probably really bad practice. I find that writing things with all possible types implicit actually makes my code a lot easier for me to follow, because it removes nearly all of the usual clutter of C++. Your mileage may, of course, vary, and that's what I'm interested in hearing about. What are the specific advantages and disadvantages to radical use of type inference?

    Read the article

  • best way to "introduce" OOP/OOD to team of experienced C++ engineers

    - by DXM
    I am looking for an efficient way, one that also doesn't come off as an insult, to introduce OOP concepts to existing team members. My teammates are not new to OO languages. We've been doing C++/C# for a long time, so the technology itself is familiar. However, I look around and, without a major infusion of effort (mostly in the form of code reviews), it seems what we are producing is C code that happens to be inside classes. There's almost no use of the single responsibility principle, abstractions, or attempts to minimize coupling, just to name a few. I've seen classes that don't have a constructor but get memset to 0 every time they are instantiated.

    Every time I bring up OOP, everyone nods and makes it seem like they know exactly what I'm talking about. Knowing the concepts is good, but we (some more than others) seem to have a very hard time applying them when it comes to delivering actual work. Code reviews have been very helpful, but the problem with code reviews is that they only occur after the fact, so to some it seems we end up rewriting (it's mostly refactoring, but it still takes lots of time) code that was just written. Also, code reviews only give feedback to an individual engineer, not the entire team.

    I am toying with the idea of doing a presentation (or a series) to bring up OOP again, along with some examples of existing code that could have been written better and could be refactored. I could use some really old projects that no one owns anymore, so at least that part shouldn't be a sensitive issue. However, will this work? As I said, most people have done C++ for a long time, so my guess is that (a) they'll sit there wondering why I'm telling them stuff they already know, or (b) they might actually take it as an insult because I'm telling them they don't know how to do the job they've been doing for years if not decades.

    Is there another approach that would reach a broader audience than a code review does, but at the same time wouldn't feel like a punishment lecture? I'm not a fresh kid out of college with utopian ideals of perfectly designed code, and I don't expect that from anyone. The reason I'm writing this is because I just did a review of a person who actually had a decent high-level design on paper. However, if you picture classes A - B - C - D, in the code B, C and D all implement almost the same public interface, and B/C have one-liner functions, so that the top-most class A is doing absolutely all the work (down to memory management, string parsing, setup negotiations...) primarily in 4 mongo methods and, for all intents and purposes, calls almost directly into D.

    Update: I'm a tech lead (6 months in this role) and do have the full support of the group manager. We are working on a very mature product, and maintenance costs are definitely letting themselves be known.

    Read the article

  • What does the Spring framework do? Should I use it? Why or why not?

    - by sangfroid
    So, I'm starting a brand-new project in Java, and am considering using Spring. Why am I considering Spring? Because lots of people tell me I should use Spring! Seriously, any time I've tried to get people to explain what exactly Spring is or what it does, they can never give me a straight answer. I've checked the intros on the SpringSource site, and they're either really complicated or really tutorial-focused, and none of them give me a good idea of why I should be using it, or how it will make my life easier. Sometimes people throw around the term "dependency injection", which just confuses me even more, because I think I have a different understanding of what that term means.

    Anyway, here's a little about my background and my app: I've been developing in Java for a while, doing back-end web development. Yes, I do a ton of unit testing. To facilitate this, I typically make (at least) two versions of a method: one that uses instance variables, and one that only uses variables that are passed in to the method. The one that uses instance variables calls the other one, supplying the instance variables. When it comes time to unit test, I use Mockito to mock up the objects and then make calls to the method that doesn't use instance variables. This is what I've always understood "dependency injection" to be.

    My app is pretty simple, from a CS perspective. Small project, 1-2 developers to start with. Mostly CRUD-type operations with a bunch of search thrown in. Basically a bunch of RESTful web services, plus a web front-end and then eventually some mobile clients. I'm thinking of doing the front-end in straight HTML/CSS/JS/jQuery, so no real plans to use JSP. I'm using Hibernate as an ORM, and Jersey to implement the web services. I've already started coding, and am really eager to get a demo out there that I can shop around and see if anyone wants to invest. So obviously time is of the essence.

    I understand Spring has quite the learning curve, plus it looks like it necessitates a whole bunch of XML configuration, which I typically try to avoid like the plague. But if it can make my life easier and (especially) if it can make development and testing faster, I'm willing to bite the bullet and learn Spring. So please, educate me. Should I use Spring? Why or why not?
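    For contrast with the two-versions-of-every-method pattern described above, constructor injection (the heart of what Spring means by dependency injection) looks roughly like this. A sketch only; the names are illustrative, and Spring can wire it with annotations rather than XML:

    ```java
    // Hedged sketch: the dependency arrives once, at construction time, so
    // every method can use the field and tests can hand in a Mockito mock.
    // UserRepository and ReportService are illustrative names.
    public class ReportService {
        private final UserRepository users;   // an interface, not a concrete class

        public ReportService(UserRepository users) {
            this.users = users;
        }

        public int activeUserCount() {
            return users.findActive().size();
        }
    }

    // In a unit test:  new ReportService(mock(UserRepository.class));
    // In production:   Spring constructs it, injecting a real repository.
    ```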

    Read the article

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, as on every commit we have to run new procs. I have used some basic ORMs in the past, and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else).

    The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files. The classes work something like this:

    ```csharp
    class Foo
    {
        public bool LoadFoo()
        {
            bool blnResult = false;

            if (this.FooID == 0)
            {
                throw new Exception("FooID must be set before calling this method.");
            }

            DataSet ds = // ... call to Sproc

            if (ds.Tables[0].Rows.Count > 0)
            {
                foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString();
                // other properties set
                blnResult = true;
            }

            return blnResult;
        }
    }

    // Consumer
    Foo foo = new Foo();
    foo.FooID = 1234;
    foo.LoadFoo();
    // do stuff with foo...
    ```

    There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application.

    Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code because "it works", so we cannot change the existing classes to use an ORM. But I don't know how often we add brand new modules instead of adding to or fixing current modules, so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head the only benefits I can think of are cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones still use DataSets) and less hassle having to remember which procedure scripts have been run and which still need to be run, etc. But that's it, and I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to be concerned about.
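    For comparison, the same load through an ORM reduces to a typed lookup. This is a sketch using Entity Framework's DbContext API (the context class and mapping are assumptions; NHibernate would read much the same):

    ```csharp
    // Hedged sketch: typed entity access instead of untyped DataSets.
    using (var db = new AppContext())        // AppContext: hypothetical DbContext subclass
    {
        Foo foo = db.Foos.Find(1234);        // primary-key lookup; typed Foo or null
        if (foo != null)
        {
            Console.WriteLine(foo.FooName);  // properties are ordinary typed members
        }
    }
    ```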

    Read the article

  • Unable to connect to mail server via IMAP and roundcube

    - by mrhatter
    I am having trouble getting the final parts of my mail server up and working. I followed this tutorial to get everything set up on the mail server side. I have installed Roundcube for webmail and configured it, but it says "error connecting, connection refused" when attempting to connect via IMAP. This is through the "test imap" section of its installer. It is also giving me an error message about permissions for its log and temp folders, but that's not as important as actually getting mail to work. I have also tried connecting to the mail server using Thunderbird, but it cannot establish a connection either, and I know my login information is correct. I know that the databases are working correctly based on the Roundcube installer telling me that they have been "successfully initialized".

    Here are my firewall rules, which I set up in iptables (modified from the ones in this tutorial):

    ```
    -A INPUT -i lo -j ACCEPT
    -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 487 -j ACCEPT
    -A OUTPUT -p tcp -m tcp --dport 993 -j ACCEPT
    -A INPUT -j DROP
    ```

    I'm not sure what to try next. Any help would be wonderful! I am using Ubuntu 14.04 server, Apache 2.4.7, Roundcube 1.0.1, and the latest versions of Dovecot and Postfix. The email databases are in MySQL. I am running this on a VPS server.

    UPDATE: I have changed from iptables to ufw. I ran the following commands to set up a basic firewall:

    ```
    ufw default deny
    ufw allow ssh
    ufw allow http
    ufw allow https
    ufw allow imap
    ufw allow imaps
    ufw allow smtp
    ```

    I then used telnet to check all of the mail ports. Port 993 isn't working, even though ufw says both 993 and 993/tcp are open. What am I missing?
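    Since the firewall now allows 993 but telnet still fails, the next thing to establish is whether Dovecot is actually listening on that port at all. A sketch of standard checks (nothing here is specific to this server):

    ```
    # Hedged sketch: is anything bound to the IMAP/IMAPS ports?
    sudo netstat -tlnp | grep -E ':(143|993)'
    # What does Dovecot think it should serve (protocols, ssl, listen)?
    sudo doveconf -n | grep -E 'protocols|ssl|listen'
    # Watch the mail log while a client attempts to connect.
    sudo tail -f /var/log/mail.log
    ```

    A connection refused on 993 while 143 is open usually points at the SSL listener being disabled in the Dovecot config rather than at the firewall.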

    Read the article

  • Ubuntu hangs on booting up after a update

    - by alFReD NSH
    I've made a clean install yesterday, for the first time restarted, everything went good and then after I updated packages and copied my old home directory to replace the new one, when I restarted it hung when it was booting. I tried reinstalling again and doing the same thing, but again same thing happened. Here's what I see, before when the Ubuntu logo with the five dots is shown: Then after that, 3 or 4 of the dots will load and hangs there. If I press arrow up before that, this will be shown I started my laptop again today(the pictures are for the day before) and after that, boot up with live CD and got the logs. dmesg: http://pastebin.com/aVxV7BQF syslog: http://pastebin.com/4E2BrRUK And some info: alfred@alFitop:~$ uname -a Linux alFitop 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux lshw: http://pastebin.com/AZbKJmsT sources.list : http://pastebin.com/2HazmuyV My problem is a bit similar to here: http://ubuntuforums.org/showthread.php?t=1918271 Though I didn't change my x.org config. Only changed home directory and updated packages. I've tried memtest and fschk, both passed. In the recovery mode boot option, I've also realized that same things happen in failsafe graphical mode. But when I go into the network mode, I can boot up my system, but of course same the graphics are just basic. Adding blacklist intel_ips to /etc/modprobe.d/blacklist.conf solves the first message, but still I get the broken pipe and CPU stack traces. The current kernel version is 3.2.0-25, I've tried booting up in the 3.2.0-23(the one the installer came with, but same results.) Also uninstalled apparmor, didn't help. I've installed Ubuntu again, this time without copying the home directory, also same result. --- UPDATE --- This problem was solved before with removing backports, but its back again! I've updated my laptop last night and the problem came back. It's definitely one of these packages. My /var/log/apt/term.log and /var/log/apt/history.log. I'm almost having the same situation. --- UPDATE --- I realized this also have happened on times that I have updated(haven't restarted after it) and my computer power has been cut off and its shutdown due to lack of power. And I realized if I just do as I answered but not in somewhere without GUI(networking mode has the GUI) it wouldn't work.

    Read the article

  • Client-Server connection response timeout issues

    - by Srikar
    User creates a folder in client and in the client-side code I hit an API to the server to make this persistent for that user. But in some cases, my server is so busy that the request timesout. The server has executed my request but timedout before sending a response back to client. The timeout set is 10 seconds in client. At this point the client thinks that server has not executed its request (of creating a folder) and ends up sending it again. Now I have 2 folders on the server but the user has created only 1 folder in the client. How to prevent this? One of the ways to solve this is to use a unique ID with each new request. So the ID acts as a distinguisher between old and new requests from client. But this leads to storing these IDs on my server and do a lookup for each API call which I want to avoid. Other way is to increase the timeout duration. But I dont want to change this from 10 seconds. Something tells me that there are better solutions. I have posted this question in stackoverflow but I think its better suited here. UPDATE: I will make my problem even more explicit. The client is a webbrowser and the server is running nginx+django+mysql (standard stack). The user creates a folder in webbrowser. As a result I need to hit a server API. The API call responds back, thereby client knows API call was success. This is normal scenario. Sometimes though, server successfully completes the API request but the client-side (webbrowser) connection timesout before server can respond back. The client has no clue at this point. The user thinks the request was a fail & clicks again. This time it was a success but when the UI refreshes he sees 2 folders. I want to remedy this situation.

    Read the article

  • how do I set quad buffering with jogl 2.0

    - by tony danza
    I'm trying to create a 3D renderer for stereo vision with quad buffering with Processing/Java. The hardware I'm using is ready for this, so that's not the problem. I had a stereo.jar library for JOGL 1.0 working with Processing 1.5, but now I have to use Processing 2.0 and JOGL 2.0, therefore I have to adapt the library. Some things have changed in the source code of JOGL and Processing, and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code:

    ```java
    public class Theatre extends PGraphicsOpenGL {
        protected void allocate()
        {
            if (context == null)
            {
                // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them
                GLCapabilities capabilities = new GLCapabilities();

                // Starting in release 0158, OpenGL smoothing is always enabled
                if (!hints[DISABLE_OPENGL_2X_SMOOTH])
                {
                    capabilities.setSampleBuffers(true);
                    capabilities.setNumSamples(2);
                }
                else if (hints[ENABLE_OPENGL_4X_SMOOTH])
                {
                    capabilities.setSampleBuffers(true);
                    capabilities.setNumSamples(4);
                }
                capabilities.setStereo(true);

                // get a rendering surface and a context for this canvas
                GLDrawableFactory factory = GLDrawableFactory.getFactory();
                drawable = factory.getGLDrawable(parent, capabilities, null);
                context = drawable.createContext(null);

                // need to get proper opengl context since will be needed below
                gl = context.getGL();

                // Flag defaults to be reset on the next trip into beginDraw().
                settingsInited = false;
            }
            else
            {
                // The following three lines are a fix for Bug #1176
                // http://dev.processing.org/bugs/show_bug.cgi?id=1176
                context.destroy();
                context = drawable.createContext(null);
                gl = context.getGL();
                reapplySettings();
            }
        }
    }
    ```

    This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch:

    ```java
    PGraphicsOpenGL pg = ((PGraphicsOpenGL) g);
    pgl = pg.beginPGL();
    gl = pgl.gl;
    glu = pg.pgl.glu;
    gl2 = pgl.gl.getGL2();

    GLProfile profile = GLProfile.get(GLProfile.GL2);
    GLCapabilities capabilities = new GLCapabilities(profile);
    capabilities.setSampleBuffers(true);
    capabilities.setNumSamples(4);
    capabilities.setStereo(true);

    GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);
    ```

    If I go on, I should do something like this:

    ```java
    drawable = factory.getGLDrawable(parent, capabilities, null);
    ```

    but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try this:

    ```java
    gl2.glDrawBuffer(GL.GL_BACK_RIGHT);
    ```

    it obviously doesn't work :/ Thanks.

    Read the article

  • I'd like to switch from 32-bit to 64-bit within same version

    - by Marty Fried
    I have a 32-bit installation of 11.10 on my 64-bit (4 GB) home AMD system. I have recently read up a bit on the 64-bit version, and it seems that it would now be a marginally better choice for me. I have read about several methods to help reinstall all the various apps, using either dpkg's get-selections/set-selections and dselect in various ways, or using Synaptic's save/read markings. The problem is that I've read several variations, and I'm not sure which is best. I have enough disk space to do this with a brand new partition, so I'm not too worried about destroying anything, but I don't really want to make it my life's work either, hence my appeal for expert tips.

    Since it's the same version, would it be safe to copy configuration files from the 32-bit system? I'd guess my home directory and /etc might be enough, and would save at least most of the reconfiguration time. But are there differences in the configuration files in either of these directories for 32 vs 64 bits that might cause problems? After reinstalling to 64-bit, I can then continue along the 64-bit path for upgrades, but I thought it would be easier to switch within the same version than to try to reinstall apps and upgrade at the same time.

    Some methods I've seen suggested, among others:

    A. From the Ubuntu forums: On your old system (assuming it is still working), start up Synaptic and go: File -> Save Markings, and choose a file name along with a location (like a USB drive) that you can use when you have installed your new system. You need to check "Save full state, not only changes" at the bottom. This file contains a list of all your currently installed packages, and when you have installed and booted up your new system (and configured your repositories to the best for your location - as we all do, don't we?), start up Synaptic and go: File -> Read Markings, point it at your saved file, and after that has completed, select Apply to kick off the download and installation of all of those packages you had installed previously!

    B. From the same discussion: According to section 6.4.9 of the Debian Reference Manual, the following will save both the list of packages installed and their debconf configuration:

    ```
    # dpkg --get-selections "*" > myselections   # or use \*
    # debconf-get-selections > debconfsel.txt
    ```

    and the following will reinstall and reconfigure them:

    ```
    # dselect update
    # debconf-set-selections < debconfsel.txt
    # dpkg --set-selections < myselections
    # apt-get -u dselect-upgrade   # or dselect install
    ```

    C. A variation on the above that I've seen a lot, this one from Stack Overflow:

    ```
    dpkg --get-selections > package_list
    ```

    then on the new install:

    ```
    cat package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade
    ```

    I don't really understand B, or why it's slightly different from many others.

    Read the article

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released.  NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high quality code.  A month ago I wrapped up my evaluation of the previous version of NDepend.

    The new version contains many minor changes, several bug fixes, and adds about 50 new code rules.  The version also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0.  But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq

    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried).  As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules.  The best way to illustrate the differences is with an example.  I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike “I”

    This same query looks like this when implemented in CQLinq:

        warnif count > 0
        from t in Types
        where t.IsInterface == true && !t.NameLike(“I”)
        select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window, which replaces the CQL Query Edit window.  The new editor has the same style of Intellisense as the previous editor.  However, it has a few annoyances.  The error indicator is a red block, which has a tendency to obscure your cursor.  Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring.  These issues can be an obstacle to writing queries quickly.

    CQLinq makes it possible to write rules that weren't possible before.  Additionally, a JustMyCode domain is now available, making it easy to eliminate generated code from the analysis (an illustrative example follows below).

    Should you Buy?

    I recommend NDepend overall.  It has some rough points for me that I have detailed in my earlier evaluation (starting here).  But it’s definitely worth the money.  The bigger question is: should I pay for the upgrade to 4.0?  At this point I’m on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0; or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes
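    For instance, a rule scoped to the new JustMyCode domain might look like this (an illustrative query in the spirit of the built-in rules, not taken from the release notes; the 30-line threshold is arbitrary):

        warnif count > 0
        from m in JustMyCode.Methods
        where m.NbLinesOfCode > 30
        select m

    Because the query ranges over JustMyCode.Methods rather than all Methods, code excluded by the notmycode queries (such as generated code) drops out of the result automatically.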

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4 based systems, so it's probably a good time to review what I consider to be the best practices.

    Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release; (b) every release we improve the generated code, so you will see things get better; and (c) old releases cannot know about new hardware.

    Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better, as this will add within-file inlining.

    Always generate debug information, using -g. This allows the tools to attribute information to lines of source, which is particularly important when profiling an application.

    The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on those architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion.

    Crossfile optimisation (-xipo) can be very useful, particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as a favourite compiler optimisation, then this is mine!

    Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback. (A sketch of a profile-feedback build follows below.)

    The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best performing binary, but with a few caveats: it assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating.

    The SPARC64 processor, T3, and T4 implement floating point multiply accumulate instructions, which can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and also an architecture that supports the instruction (at least -xarch=sparcfmaf).

    The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.
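    Putting a few of these together, a profile-feedback build with crossfile optimisation might look like the following with the Studio cc compiler (the source file, output name, and training input are hypothetical):

        # Step 1: build an instrumented binary and run it on representative input
        cc -xO4 -g -xipo -xprofile=collect:./app.profile -o app app.c
        ./app < training_input.txt

        # Step 2: rebuild, letting the compiler use the collected profile
        cc -xO4 -g -xipo -xprofile=use:./app.profile -o app app.c

    The training run should exercise the code paths that matter in production; otherwise the layout decisions will be tuned for the wrong workload.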

    Read the article

< Previous Page | 515 516 517 518 519 520 521 522 523 524 525 526  | Next Page >