Search Results

Search found 25651 results on 1027 pages for 'shell script'.


  • What packages do I need to compile .tex documents using XeLaTeX?

    - by maria
    Hi. I'm aware of the existence of similar threads on this forum, but none of the replies match my problem. I'm using Ubuntu 10.04 and I had no problems with fonts until I decided to use XeLaTeX instead of LaTeX (cf. http://tex.stackexchange.com/questions/12347/typesetting-a-document-using-arabic-script/12358#12358). The problem is that I'm not able to compile any .tex document using XeLaTeX, nor to properly display the XeLaTeX documentation. As I've learned thanks to the mentioned thread, XeLaTeX uses the fonts available in the system in general. I was trying to read the fontspec documentation, but it opens as a PDF with a lot of white gaps, and the terminal output (quite long) consists mostly of errors. These are just a few lines of it:

        Error: Missing language pack for 'Adobe-Japan1' mapping
        Error: Unknown font tag 'F5.1'
        Error (24124): No font in show
        Error: Unknown font tag 'F5.1'

    I was trying to compile a simple XeLaTeX file:

        \documentclass{article}
        \usepackage{fontspec}
        \setmainfont{Linux Libertine O}
        \begin{document}
        Hello World!
        \end{document}

    without success. This is the terminal output of the compilation:

        This is XeTeX, Version 3.1415926-2.2-0.9995.2 (TeX Live 2009/Debian)
        restricted \write18 enabled.
        entering extended mode
        (./ex.tex
        LaTeX2e <2009/09/24>
        Babel <v3.8l> and hyphenation patterns for english, usenglishmax, dumylang,
        nohyphenation, polish, loaded.
        (/usr/share/texmf-texlive/tex/latex/base/article.cls
        Document Class: article 2007/10/19 v1.4h Standard LaTeX document class
        (/usr/share/texmf-texlive/tex/latex/base/size10.clo))
        (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.sty
        (/usr/share/texmf-texlive/tex/generic/ifxetex/ifxetex.sty)
        (/usr/share/texmf-texlive/tex/latex/tools/calc.sty)
        (/usr/share/texmf-texlive/tex/latex/xkeyval/xkeyval.sty
        (/usr/share/texmf-texlive/tex/generic/xkeyval/xkeyval.tex
        (/usr/share/texmf-texlive/tex/generic/xkeyval/keyval.tex)))
        (/usr/share/texmf-texlive/tex/latex/base/fontenc.sty
        (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1enc.def)
        (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1lmr.fd))
        fontspec.cfg loaded.
        (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.cfg))
        kpathsea: Invalid fontname `Linux Libertine O', contains ' '
        ! Font \zf@basefont="Linux Libertine O" at 10.0pt not loadable: Metric (TFM)
        file or installed font not found.
        \zf@fontspec ...ntname \zf@suffix " at \f@size pt \unless \ifzf@icu \zf@set@...
        l.3 \setmainfont{Linux Libertine O}
        ?

    I can't find Linux Libertine O.
    Searching for otf- with aptitude gives this result:

        maria@maria-laptop:/etc/fonts$ aptitude search otf
        p emdebian-rootfs - emdebian root filesystem support
        p libotf-bin - A Library for handling OpenType Font - utilities
        p libotf-dev - A Library for handling OpenType Font - development
        i libotf0 - A Library for handling OpenType Font - runtime
        p libotf0-dbg - The libotf libraries and debugging symbols
        p libpam-dotfile - A PAM module which allows users to have more than one password
        p livecd-rootfs - construction script for the livecd rootfs
        p makebootfat - Utility to create a bootable FAT filesystem
        p otf-ipaexfont - Japanese OpenType font, IPAexFont (IPAexGothic/Mincho)
        p otf-ipaexfont-gothic - Japanese OpenType font, IPAexFont (IPAexGothic)
        p otf-ipaexfont-mincho - Japanese OpenType font, IPAexFont (IPAexMincho)
        p otf-ipafont - Japanese OpenType font set, IPAfont
        p otf-ipafont-gothic - Japanese OpenType font set, IPA Gothic font
        p otf-ipafont-mincho - Japanese OpenType font set, IPA Mincho font
        p otf-stix - the Scientific and Technical Information eXchange fonts
        p otf-thai-tlwg - Thai fonts in OpenType format
        p otf-yozvox-yozfont - Japanese proportional Handwriting OpenType font
        p otf2bdf - generate BDF bitmap fonts from OpenType outline fonts
        p robotfindskitten - Zen Simulation of robot finding kitten

    So the font in question is not just uninstalled but not available at all, if I'm not wrong. Does that mean I lack some repositories? I also tried to apply the solution from the thread "How do I reinstall default fonts?", but the result is:

        maria@maria-laptop:~$ sudo apt-get install msttcorefonts
        [sudo] password for maria:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting ttf-mscorefonts-installer instead of msttcorefonts
        ttf-mscorefonts-installer is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        maria@maria-laptop:~$

    It seems this is not a usual problem for XeLaTeX users; nobody in the mentioned thread suggested installing anything other than TeX Live. Thanks in advance
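
    Linux Libertine is packaged for Ubuntu, so a first thing to try is installing it from the archives. A minimal sketch, assuming the 10.04-era package name ttf-linux-libertine (verify with the search first):

        # Look for the Libertine fonts in the configured repositories
        apt-cache search libertine

        # Install the fonts and rebuild the fontconfig cache that XeLaTeX reads
        sudo apt-get install ttf-linux-libertine
        sudo fc-cache -f -v

        # Confirm the font is now visible to fontconfig (and therefore to fontspec)
        fc-list | grep -i libertine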

    Read the article

  • How do I list all my packages from the command line, showing package name, license, source URL, etc.?

    - by YumYumYum
    How can I get the list of all installed packages with their license and source URL? The following, for example, shows only the package name:

        $ dpkg --get-selections
        acpi-support institute                          install
        acpid                                           install
        adduser                                         install
        adium-theme-ubuntu                              install
        aisleriot                                       install
        alacarte                                        install

    For comparison, on Fedora/CentOS (the Red Hat branch of Linux) you can see:

        $ yum info busybox
        Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
        Available Packages
        Name        : busybox
        Arch        : i686
        Epoch       : 1
        Version     : 1.18.2
        Release     : 5.fc15
        Size        : 615 k
        Repo        : updates
        Summary     : Statically linked binary providing simplified versions of system commands
        URL         : http://www.busybox.net
        License     : GPLv2
        Description : Busybox is a single binary which includes versions of a large number
                    : of system commands, including a shell. This package can be very
                    : useful for recovering from certain types of system failures,
                    : particularly those involving broken shared libraries.

    Follow up:

        /var/lib/apt/lists$ ls
        extras.ubuntu.com_ubuntu_dists_natty_main_binary-amd64_Packages
        extras.ubuntu.com_ubuntu_dists_natty_main_source_Sources
        extras.ubuntu.com_ubuntu_dists_natty_Release
        extras.ubuntu.com_ubuntu_dists_natty_Release.gpg
        lock
        partial
        security.ubuntu.com_ubuntu_dists_natty-security_main_binary-amd64_Packages
        security.ubuntu.com_ubuntu_dists_natty-security_main_source_Sources
        security.ubuntu.com_ubuntu_dists_natty-security_multiverse_binary-amd64_Packages
        security.ubuntu.com_ubuntu_dists_natty-security_multiverse_source_Sources
        security.ubuntu.com_ubuntu_dists_natty-security_Release
        security.ubuntu.com_ubuntu_dists_natty-security_Release.gpg
        security.ubuntu.com_ubuntu_dists_natty-security_restricted_binary-amd64_Packages
        security.ubuntu.com_ubuntu_dists_natty-security_restricted_source_Sources
        security.ubuntu.com_ubuntu_dists_natty-security_universe_binary-amd64_Packages
        security.ubuntu.com_ubuntu_dists_natty-security_universe_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty_main_binary-amd64_Packages
        us.archive.ubuntu.com_ubuntu_dists_natty_main_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty_multiverse_binary-amd64_Packages
        us.archive.ubuntu.com_ubuntu_dists_natty_multiverse_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty_Release
        us.archive.ubuntu.com_ubuntu_dists_natty_Release.gpg
        us.archive.ubuntu.com_ubuntu_dists_natty_restricted_binary-amd64_Packages
        us.archive.ubuntu.com_ubuntu_dists_natty_restricted_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages
        us.archive.ubuntu.com_ubuntu_dists_natty_universe_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_main_binary-amd64_Packages
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_main_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_binary-amd64_Packages
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_Release
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_Release.gpg
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_restricted_binary-amd64_Packages
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_restricted_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_universe_binary-amd64_Packages
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_universe_source_Sources
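
    Debian/Ubuntu's package database stores no license field (license texts live in each package's copyright file), so there is no exact yum-info equivalent, but a sketch with standard tools gets close:

        # For every installed package, show version, upstream URL and summary,
        # and point at the file where the license text actually lives
        for pkg in $(dpkg --get-selections | awk '$2 == "install" {print $1}'); do
            echo "== $pkg =="
            apt-cache show "$pkg" | grep -E '^(Version|Homepage|Description):'
            echo "   license: see /usr/share/doc/$pkg/copyright"
        done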

    Read the article

  • Oracle ADF 11g - Invitation to the ADF News Online Sessions - next date: March 18, 2011

    - by heidrun.walther
    What is ADF? ADF stands for Oracle Application Development Framework. ADF implements the JEE standards and extends their functionality, particularly with regard to the large number of components provided (especially for visualization) and in the area of flow control (task flows replace page flows). ADF is one of the building blocks on which the development of all new Oracle application systems is based (including vertical solutions and the administration tools). The development tool used is Oracle JDeveloper. Rapid Application Development (RAD) is enabled by a declarative, metadata-driven development style that relies heavily, at all levels, on templating (that is, the ability to work from predefined patterns) and on reusability. Development and documentation happen in a single step. ADF works seamlessly with the other Oracle SOA tools and comes with a role/policy-driven access system. It can be integrated with Oracle Identity Management.

    ADF News Online Sessions? The ADF News Online Sessions offer tips from users and decision-makers for users and decision-makers, and provide an exchange of ideas on using ADF and on implementing ADF projects. The speakers are employees of Oracle partner companies and Oracle ADF specialists. Here are the contents of the fourth news series:

    18.02.11 - Managing migration projects: Forms - ADF / an experience report
    04.03.11 - Using Groovy in Oracle ADF Business Components (English)
    18.03.11 - Task-flow-oriented development with UI Shell
    01.04.11 - First concepts, overview, integration: Desktop ADF
    15.04.11 - ADF best practice: structuring ADF BC
    29.04.11 - Customizing ADF applications at runtime (end users) with Oracle WebCenter

    You will receive the dial-in details for each session either by joining the mailing list (mail to [email protected]) or via the ADF Community pages on XING, by registering for the session in question.

    Oracle ADF Community? The Oracle ADF Community aims to exchange information and experience about Oracle ADF, and thereby to make the Oracle Application Development Framework (ADF) development platform better known among developers, users, and IT service providers. You are warmly invited to take an active part. More under the ADF Community group on XING.

    Read the article

  • Customize SharePoint list using InfoPath 2010 form, Part 4

    - by ybbest
    Customize SharePoint list using InfoPath 2010 form, Part 1
    Customize SharePoint list using InfoPath 2010 form, Part 2
    Customize SharePoint list using InfoPath 2010 form, Part 3

    In this post, I'd like to show you how to create print functionality in an InfoPath form for a SharePoint list. Print functionality is provided out of the box in an InfoPath form library; however, it is not available in a SharePoint list. Here are the steps to create it. You can download the new form here.

    1. Create a print page in the list by copying displayifs.aspx and renaming the copy to Printifs.aspx.
    2. Open the page in SharePoint Designer and copy the following JavaScript into the PlaceHolderTitleAreaClass ContentPlaceHolder:

        <script type="text/javascript">
        $(document).ready(function(){
            $("[id^='Ribbon']").hide();
            $(".s4-title").hide();
            $("[id='s4-leftpanel']").hide();
            $("[id='s4-ribbonrow']").hide();
            $("[id='s4-titlerow']").hide();
            $("[id='s4-titlerow']").css("height", "0px");
            $("body").css("background-color", "white");
            $("body").css("zoom", "135%");
            $("[id='MSO_ContentTable']").css("margin-left", "0px");
            $("[id='MT-BodyContent']").css("width", "900px");
            $(".MT-BodyArea").css("width", "900px");
            $("[id='MT-Layout']").css("width", "900px");
            $(".ms-bodyareacell").css("width", "900px");
            $(".s4-wpTopTable").css("border", "none");
            $("[id$='XmlFormView']").css("margin-left", "-80px");
            $("body").css("margin-top", "-30px");
            $(":contains('CAPEX')").css("border", "5px solid #FFCC00");
            window.print();
        });
        </script>

    3. Open the InfoPath form for the list and create a field called PrintLink.
    4. Set the default value of PrintLink so that it points to the print page created above, with the item ID in the query string. You can download the formula for the default value here.
    5. Add a new image that looks like a Print button to the display view. (The reason I did not use a button is that you cannot set the navigate URL for a button, but you can for an image.)
    6. Set the URL of the image to the PrintLink field.
    7. Next, create the print view.
    8. Copy the contents of the display view to the print view.
    9. Finally, go to Printifs.aspx and edit the InfoPath web part to set its view to PrintView.
    10. Republish your form; you will see the form as shown below.
    11. If you click the Print button, you will see the print page and the print dialog; you can also add a company logo to the print page using CSS.
    12. To deploy the customization, you can use the backup-and-restore content database approach; you can get more details from my previous blog post here.

    Read the article

  • Solaris: What comes next?

    - by alanc
    As you probably know by now, a few months ago we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple of weeks ago with a number of the Silicon Valley-based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned: what follows are rough ideas, and as I'll discuss later, none of them have any commitment, schedule, working code, or even plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here; you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups and competitive OSes. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas.

    Making Network Admins into Socially Networking Admins

    We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there's enough of those already, and Oracle even has its own Oracle Mix social network. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see: "Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today!" To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certifications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would provide recruiters with a more detailed picture of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system. For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail, or skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, not worrying about the effect on your business of a mission-critical server going down.

    More Z's for ZFS

    Our suggested new feature for ZFS was inspired by the world's most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam, needing more disk space than you have and unable to wait a month to get a purchase order through channels to buy more. Instead, with the click of a button you could post to your group: "Alan can't find any space in his server farm! Can you help?" Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary-use space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed.

    Universal Single Sign On

    One thing all the engineers agreed on was that we still had far too many "single" sign-ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to log in using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would log in to your Solaris account upon receipt of a cookie from their identity service.

    Pinning notes to the cloud

    We also all noted that we each have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary: some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple of years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember that one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem? We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only with users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated, completely random, impossible-to-remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation.

    Location service for Solaris servers

    A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in today's massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million of SPARC servers in this building, waiting for you to steal them.”, so it's probably best it was rejected.]

    Harnessing the power of the GPU for Security

    Most modern OSes make use of the widespread availability of high-powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limitations such as missing eyes or fingers, not to mention the lost productivity when employees can't log in due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have: pam_phrenology.so, a new PAM module that uses an array of USB-attached web cams (or just one, if the user is willing to spin their chair during login) to take pictures of the user's head from all angles, create a 3D model, and compare it to the one in the authentication database. While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy-use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days.

    Reaction to these ideas

    After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish-themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ends on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved over just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, for example by considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and we might have to try harder to get those to be accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.

    Read the article

  • What You Said: How Do You Sync Your Files Between Your Devices?

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your tricks and techniques for keeping files synced between your different devices. Now we're back to highlight how you do it. Overwhelmingly, you do it with Dropbox. Despite the proliferation of different platforms, few inroads have been made into any sort of universal syncing. We heard from quite a few different readers, and by far the most popular option was to use Dropbox to ensure that you could get the music and documents you wanted whether you were on your desktop, laptop, netbook, iPhone, or Android device. In the same breath, however, nearly all of you added on an additional service. The real message, it would seem, is that there simply isn't a service good enough to meet all of the needs most users have, all of the time. The most common response to our Ask the Readers question was "Dropbox and..."; this pattern is illustrated nicely in the following quote. Kim writes:

        Dropbox for all kinds of things. (Would also use Sugarsync, but it doesn't support Linux.) Lastpass for passwords. Xmarks for bookmarks, although I'm going to try Firefox Sync soon. Evernote for things like shell commands I might want someday. Google Beta for music, once I get it uploaded. I have an Amazon account too, but Google gives you more space. Gmail.

    Michael finds himself in a similar situation.

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema, and one of the tasks we often need to do is insert data into that schema once it is deployed; the typical method employed to do this is to leverage Post-Deployment scripts, and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than “Yes”) to define whether it should be deployed or not, so we could use something like this in our Post-Deployment script:

        IF ($(DeployBucket1Flag) = 'Yes')
        BEGIN
            :r .\Bucket1.data.sql
        END
        IF ($(DeployBucket2Flag) = 'Yes')
        BEGIN
            :r .\Bucket2.data.sql
        END
        IF ($(DeployBucket3Flag) = 'Yes')
        BEGIN
            :r .\Bucket3.data.sql
        END

    That works fine and is, I'm sure, a very common technique for doing this. It is however slightly ugly, because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I'm sure many of you know) suggested another technique – bitmasks. I won't go into detail about how this works (James has already done that at Using a Bitmask - a practical example), but I'll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value's binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts):

        /*
        $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT,
        be declared in the SQLCMD variables section of your project file. It should
        contain a numerical value, defaulted to 0. In this example I have declared it
        using a :setvar statement. Test the effect of different values by changing the
        :setvar statement accordingly. Examples:
          :setvar DeployData 1   will deploy bucket 1
          :setvar DeployData 2   will deploy bucket 2
          :setvar DeployData 3   will deploy buckets 1 & 2
          :setvar DeployData 6   will deploy buckets 2 & 3
          :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5
        */
        :setvar DeployData 0

        DECLARE @bitmask VARBINARY(MAX) = CONVERT(VARBINARY, $(DeployData));

        IF (@bitmask & 1 = 1)
        BEGIN
            PRINT 'Bucket 1 insertions';
        END
        IF (@bitmask & 2 = 2)
        BEGIN
            PRINT 'Bucket 2 insertions';
        END
        IF (@bitmask & 4 = 4)
        BEGIN
            PRINT 'Bucket 3 insertions';
        END
        IF (@bitmask & 8 = 8)
        BEGIN
            PRINT 'Bucket 4 insertions';
        END
        IF (@bitmask & 16 = 16)
        BEGIN
            PRINT 'Bucket 5 insertions';
        END

    An example of running this using DeployData=6: the binary representation of 6 is 110. The second and third least significant bits of that binary number are set to 1, and hence buckets 2 and 3 are “activated”. Hope that makes sense and is useful to some of you!

    @Jamiet

    P.S. I used the awesome HTML Copy feature of Visual Studio's Productivity Power Tools in order to format the T-SQL code above for this blog post.
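
    For context, here is a sketch of how that single variable might be supplied when publishing the dacpac from the command line; the /v: variable syntax is standard SqlPackage, but the file and profile names here are placeholders:

        # Deploy buckets 2 and 3 (binary 110 = 6) in one publish operation
        SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /Profile:MyDatabase.publish.xml /v:DeployData=6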

    Read the article

  • Converting from mp4 to Xvid avi using avconv?

    - by Ricardo Gladwell
    I normally use avidemux to convert mp4s to Xvid AVI for my Philips Streamium SLM5500. Normally I select MPEG-4 ASP (Xvid) at Two Pass with an average bitrate of 1500 kb/s for video and AC3 (lav) audio, and it converts correctly. However, I'm trying to use avconv so I can automate the process with a script, but when I do this the video stutters and stops playing part way through. I have a suspicion it's something to do with a faulty audio conversion. The commands I'm using are as follows:

        avconv -y -i video.mp4 -pass 1 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi /dev/null
        avconv -y -i video.mp4 -pass 2 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi video.avi

    There is a bewildering array of arguments for avconv. Is there something I'm doing wrong? Is there a way I can script avidemux from a headless server? Please see the command-line output:

        $ avconv -y -i video.mp4 -pass 1 -vtag xvid -an -b:v 1500k -f avi /dev/null
        avconv version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Jan 24 2013 14:49:20 with gcc 4.7.2
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
            creation_time   : 2013-02-04 13:53:38
          Duration: 00:44:09.16, start: 0.000000, bitrate: 669 kb/s
            Stream #0.0(und): Video: h264 (High), yuv420p, 720x404 [PAR 1:1 DAR 180:101], 538 kb/s, 25 fps, 25 tbr, 100 tbn, 50 tbc
            Metadata:
              creation_time   : 2013-02-04 13:53:38
            Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 2013-02-04 13:53:42
        [buffer @ 0x7f4c40] w:720 h:404 pixfmt:yuv420p
        Output #0, avi, to '/dev/null':
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
            creation_time   : 2013-02-04 13:53:38
            ISFT            : Lavf53.21.1
            Stream #0.0(und): Video: mpeg4, yuv420p, 720x404 [PAR 1:1 DAR 180:101], q=2-31, pass 1, 1500 kb/s, 25 tbn, 25 tbc
            Metadata:
              creation_time   : 2013-02-04 13:53:38
        Stream mapping:
          Stream #0:0 -> #0:0 (h264 -> mpeg4)
        Press ctrl-c to stop encoding
        frame=66227 fps=328 q=2.0 Lsize=       0kB time=2649.16 bitrate=   0.0kbits/s
        video:401602kB audio:0kB global headers:0kB muxing overhead -100.000000%

        $ avconv -y -i video.mp4 -pass 2 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi video.avi
        avconv version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Jan 24 2013 14:49:20 with gcc 4.7.2
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
            creation_time   : 2013-02-04 13:53:38
          Duration: 00:44:09.16, start: 0.000000, bitrate: 669 kb/s
            Stream #0.0(und): Video: h264 (High), yuv420p, 720x404 [PAR 1:1 DAR 180:101], 538 kb/s, 25 fps, 25 tbr, 100 tbn, 50 tbc
            Metadata:
              creation_time   : 2013-02-04 13:53:38
            Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 2013-02-04 13:53:42
        [buffer @ 0x12b4f00] w:720 h:404 pixfmt:yuv420p
        Incompatible sample format 's16' for codec 'ac3', auto-selecting format 'flt'
        [mpeg4 @ 0x12b3ec0] [lavc rc] Using all of requested bitrate is not necessary for this video with these parameters.
        Output #0, avi, to 'video.avi':
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
            creation_time   : 2013-02-04 13:53:38
            ISFT            : Lavf53.21.1
            Stream #0.0(und): Video: mpeg4, yuv420p, 720x404 [PAR 1:1 DAR 180:101], q=2-31, pass 2, 1500 kb/s, 25 tbn, 25 tbc
            Metadata:
              creation_time   : 2013-02-04 13:53:38
            Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, flt, 128 kb/s
            Metadata:
              creation_time   : 2013-02-04 13:53:42
        Stream mapping:
          Stream #0:0 -> #0:0 (h264 -> mpeg4)
          Stream #0:1 -> #0:1 (ac3 -> ac3)
        Press ctrl-c to stop encoding
        Input stream #0:1 frame changed from rate:44100 fmt:s16 ch:2 to rate:44100 fmt:flt ch:2
        frame=66227 fps=284 q=2.2 Lsize=  458486kB time=2649.13 bitrate=1417.8kbits/s
        video:413716kB audio:41393kB global headers:0kB muxing overhead 0.741969%
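
    Since the goal is automation, here is a sketch of a wrapper script for the two-pass run. The -c:a copy idea is a guess at the stutter, not a confirmed fix: the source track is already AC3, so copying it avoids the s16-to-flt resample flagged in the log above.

        #!/bin/bash
        # Two-pass Xvid conversion for every mp4 in the current directory
        for f in *.mp4; do
            out="${f%.mp4}.avi"
            # Pass 1: analysis only, no audio, output discarded
            avconv -y -i "$f" -pass 1 -vtag xvid -an -b:v 1500k -f avi /dev/null
            # Pass 2: real encode; copy the AC3 track instead of re-encoding it
            avconv -y -i "$f" -pass 2 -vtag xvid -c:a copy -b:v 1500k -f avi "$out"
        done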

    Read the article

  • Alert for Forms customers running Oracle Forms 10g

    - by Grant Ronald
    Doesn’t time fly! While you might have been happily running your Forms 10g applications for about 5 years or so now, the end of premier support is creeping up and you need to start planning for a move to Oracle Forms 11g. The premier support end date is December 31st, 2011, and this is documented in Note 1290974.1, available from MOS. So how much of an impact is this going to be? Maybe not as much as you think. While Forms 11g is based on WebLogic Server (WLS), 10g was based on OC4J. That in itself doesn't impact Forms much. In most cases it will simply be a recompile of your Forms source files and a redeploy on WLS 11g. The Forms Builder is the same in 11g as in 10g, although it's currently not available as a separate download from the main middleware bundle. You can also look at a WLS Basic license option, which means you shouldn't have to shell out on upgrading to a WLS Suite license option. So what's the proof that this is a relatively straightforward upgrade? Well, we've had a big uptake of Forms 11g already (which has itself been out for over 2 years). Read about BT Expedite's upgrade, where “The upgrade of Forms from Oracle Forms 10g to Oracle Forms 11g was relatively simple and on the whole, was just a recompilation”. Or AMEC, where “This has been one of the easiest Forms conversions we've ever done and it was a simple recompile in all cases”. So if you are on 10g (or even earlier versions) I'd strongly consider starting your planning for an upgrade to 11g now. As always, if you have any questions about this you can post on the OTN forums.
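
    For a sense of what “just a recompile” looks like in practice, a sketch of a bulk recompile using the Forms batch compiler; frmcmp_batch ships with Forms, but the connect string and directory layout here are illustrative, not taken from the article:

        # Recompile every form module in the current directory against 11g
        for f in *.fmb; do
            frmcmp_batch module="$f" userid=scott/tiger@orcl \
                         module_type=form compile_all=yes
        done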

    Read the article

  • How do I install Revenge of the Titans?

    - by Akash
    I've downloaded the .deb file of Revenge of the Titans, and installed it using Ubuntu Software Center. Now, when I try to launch it using the software launcher, nothing happens. Any ideas? The .deb file was downloaded from the Humble Indie Bundle. I am unable to launch it from the terminal (the command revenge-of-the-titans says command not found). I also tried the .tar.gz. When I extract it and run ./revenge.sh, nothing happens; no output on the terminal or anything at all. I have set chmod 777 revenge.sh as well. The command /opt/revengeofthetitans/revenge.sh does not give any output. This is what gedit /opt/revengeofthetitans/revenge.sh shows:

        #!/bin/bash
        #
        # revenge.sh
        #
        ###############################################################################

        SCRIPT="`basename $0`"
        GAMEDIR="${HOME}/.revenge_of_the_titans_1.80"
        LOGFILE="${GAMEDIR}/${SCRIPT}.log"
        INSTDIR="`dirname $0`" ; cd "${INSTDIR}" ; INSTDIR="`pwd`"

        [[ ! -d "${GAMEDIR}" ]] && mkdir -m 0755 "${GAMEDIR}"

        JARPATH="patch.jar:RevengeOfTheTitans.jar:data-hib.jar:gfx.jar:fonts.jar:images.jar:music.jar:fx-mono.jar:fx-stereo.jar:gamecommerce.jar:common.jar:spgl-lite.jar:lwjgl.jar:lwjgl_util.jar:jorbis.jar:jinput.jar"

        # XMODIFIERS is cleared here to prevent SCIM screwing up keyboard input
        XMODIFIERS= java \
            -noverify \
            -Djava.library.path="${INSTDIR}" \
            -Dorg.lwjgl.util.NoChecks=true \
            -Dorg.lwjgl.librarypath="${INSTDIR}" \
            -Dnet.puppygames.applet.Launcher.resources=/resources-hib.dat \
            -Dnet.puppygames.applet.Game.gameResource=game.hib \
            -XX:MaxGCPauseMillis=3 \
            -Xms64m \
            -Xmx375m \
            -Xincgc \
            -cp "${JARPATH}" \
            net.puppygames.applet.Launcher \
            "$@" \
            >"${LOGFILE}" 2>&1

        exit 0

        #
        # EOF
        #
        ###############################################################################
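
    A sketch of first debugging steps, given the script above: the package path /opt/revengeofthetitans comes from the question, and the log path follows from the GAMEDIR and LOGFILE variables in the script.

        # The launcher sends all output to a log file; check it first
        cat ~/.revenge_of_the_titans_1.80/revenge.sh.log

        # Trace the script to see exactly where it dies
        bash -x /opt/revengeofthetitans/revenge.sh

        # The script assumes a Java runtime on the PATH; confirm one exists
        which java || sudo apt-get install openjdk-6-jre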

    Read the article

  • How do I get Intel 845g graphics working?

    - by Rayson Jimenez
    I need help getting the Intel 845G to run with the Intel driver. I've looked everywhere on the net with no joy. I can't get the Intel video driver to load the GUI; only the VESA driver works, and otherwise it drops me straight to the shell prompt. Xorg.0.log shows the following. Any help would be GREATLY appreciated.

        [   244.843] (II) LoadModule: "intel"
        [   244.843] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so
        [   244.844] (II) Module intel: vendor="X.Org Foundation"
        [   244.844]    compiled for 1.9.0, module version = 2.12.0
        [   244.844]    Module class: X.Org Video Driver
        [   244.844]    ABI class: X.Org Video Driver, version 8.0
        [   244.844] (II) intel: Driver for Intel Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, B43, Clarkdale, Arrandale, Sandybridge, Sandybridge, Sandybridge, Sandybridge, Sandybridge, Sandybridge, Sandybridge
        [   244.844] (--) using VT number 8
        [   244.971] (EE) intel(0): No kernel modesetting driver detected.
        [   244.971] (II) UnloadModule: "intel"
        [   244.971] (EE) Screen(s) found, but none have a usable configuration.
        [   244.971] Fatal server error:
        [   244.971] no screens found
        [   244.971] Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        [   244.971] Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        [   244.971]
        [   245.213] ddxSigGiveUp: Closing log
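
    The key line is "(EE) intel(0): No kernel modesetting driver detected": the intel X driver needs the i915 kernel driver loaded with KMS enabled. A sketch of one common fix from that era (the conf file name is an assumption; any file in /etc/modprobe.d works):

        # Ask the i915 kernel driver to enable kernel modesetting
        echo "options i915 modeset=1" | sudo tee /etc/modprobe.d/i915-kms.conf
        sudo update-initramfs -u
        sudo reboot

        # After rebooting, confirm KMS came up before starting X
        dmesg | grep -i drm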

    Read the article

  • Running Non-profit Web Applications on Cloud/Dedicated Hosting [closed]

    - by cillosis
    Possible Duplicate: How to find web hosting that meets my requirements?

    I often build web applications purely because I enjoy it. I like building useful tools or open source applications that don't come with a price tag. That being said, many of these applications can be quite complex, requiring services beyond shared hosting (ex. specific PHP extensions). This leaves me with two options:

    1. Make the web application less complex so it can run on shared hosting.
    2. Fork out money for cloud or dedicated/VPS hosting.

    Considering the application is free (I don't make money off of it intentionally), the money for hosting comes out of my own pocket. I know I am not alone in this sticky situation. So the question is: what are the hosting options that provide more advanced features, such as shell access via SSH and the ability to install specific software/extensions (ex. if I wish to use a NoSQL DB such as Redis, MongoDB, or Cassandra), at a free or low price point? I know free usually equates to bad/unreliable hosting -- but it's not always the case. There are a couple of providers with free plans I know of:

    - Amazon EC2 - free micro-instance for 1 year
    - AppHarbor - cloud-based .NET web application hosting with a free plan

    What else is available for hosting of non-profit applications?

    Read the article

  • pdflatex reads .eps files saved in OS X, but not in Ubuntu

    - by David B Borenstein
    Sorry if this is a stupid question; I'm a newbie. I am preparing a manuscript in LaTeX. The journal (Physical Biology, an IOP publication) requires that figures be saved in .eps format, so I am trying to do that. However, I cannot get my LaTeX file to build when I have generated the .eps files on my Ubuntu computer. If I save the images on my Mac, the file builds just fine. So far, I have tried saving images in ImageJ, FIJI and Inkscape. The same problem occurs in all three. When using Kile, I get the following error:

        /usr/share/texmf-texlive/tex/latex/oberdiek/epstopdf-base.sty:0: Shell escape feature is not enabled.

    In TeXworks, the error is different, but still there:

        Package pdftex.def Error: File `./figures4/figure4a-eps-converted-to.pdf' not found.

    Now, if I fire up Inkscape, FIJI or ImageJ on OS X, everything works fine. The Mac also can't build with the Ubuntu-saved images. The images generated on the Ubuntu machine open fine using Document Viewer. I am building the same LaTeX file on both computers, with the exact same results. The header of my LaTeX file is:

        \documentclass[12pt]{iopart}
        \usepackage{graphicx}
        \usepackage{epstopdf}
        \usepackage{parskip}
        \usepackage{color}
        \usepackage{iopams}

    And then the code for the figure is:

        \begin{figure}
        \center{\includegraphics[width=4in]
        {./figures4/figure4a.eps}}
        \footnotesize{\caption{
        \label{fig:4a}
        (4a) lorem ipsum dolor sic amet.}}
        \end{figure}

    I'd be happy to send an example of both .eps files. Again, sorry if this is a dumb question. I tried everything I could think of before posting here. Thanks, David
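
    The first error says it directly: the shell-escape feature, which the epstopdf package uses to convert .eps to .pdf on the fly, is disabled on the Ubuntu machine. A sketch of two things to try (manuscript.tex stands in for the real file name):

        # Allow pdflatex to run the epstopdf conversion during the build
        pdflatex -shell-escape manuscript.tex

        # Or convert the figures up front so no shell escape is needed;
        # this writes figures4/figure4a-eps-converted-to-free figure4a.pdf
        epstopdf figures4/figure4a.eps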

    Read the article

  • Windows Phone 7 Prototype 002: Animated Page Transitions + Writeable Bitmaps

    Motion is a key part of WP7 application development. Without motion, the WP7 UI is just a bunch of text. Not nearly as exciting. To delight users, you can add some transitions between pages. The sample app includes some storyboards to animate between two pages. Other people have noted that you can just use the TransitioningContentControl from the Silverlight Toolkit. Peter Torr also had a nice animating frame control in his Mix demo code (his blog has some other great code samples for WP7 app dev). I took some of those concepts and the code from the TransitioningContentControl to make a new animating frame control. In this prototype, the frame takes a snapshot of the old content and the new content using writeable bitmaps, animates the snapshots, and then replaces them with the actual page. The benefit is smoother animation on pages with lots of controls; otherwise, if you have a large panorama, it might not animate cleanly. Like the other solutions based on the TransitioningContentControl, you can centralize all the animations in one place and not have to handle them on each individual page. Peter's code also had a nice snippet for choosing the animation based on the navigation direction, so you could just have a forward/backward animation and not have to do anything on each page. You could also probably add some more advanced transitions using pixel shaders, or make a default no-transition state if you wanted to have some specific animation on a page where individual controls transitioned out differently, like some of the WP7 shell apps. Sample code: 100% guaranteed to work on my emulator.

    Read the article

  • Spring Roo Database Reverse Engineer with Oracle

    - by kerry
    So you are trying to reverse engineer an Oracle database with Roo? Unfortunately, due to licensing restrictions on the Oracle JDBC drivers, this is a little difficult. There are a few blog posts and forum threads that address the problem, but I figured I would post what worked for me here.

    First, you need to download the appropriate Oracle drivers from Oracle. The required login, stringent password requirements, nosy registration form, and general system instability made this a pretty painful step for me. I'd also like to say that companies with password requirements that don't allow symbols (or any other non-standard requirement) have a special place in my heart. Having to recover my password every time I go to your site virtually guarantees I will only go there when I absolutely have to (not often). Anyway, once you have the driver downloaded, you need to install it with Maven:

        mvn install:install-file -Dfile=~/Downloads/ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -DgeneratePom=true

    Here comes the fun part. You need to create an OSGi wrapper for the driver in order to install it in Roo; otherwise, Roo cannot see the driver. Create a new folder and put in it the contents of the Oracle Roo addon pom gist I created. Now build it with Maven. You may want to change some of the artifact ids and dependencies for your particular situation.

        mvn package

    Now open a Roo shell and execute the following command:

        osgi install --url file:///Users/me/my-osgi-project/target/the-jar-it-built.jar

    Now run (in Roo):

        jpa setup --provider HIBERNATE --database ORACLE
        dependency remove --groupId com.oracle --artifactId ojdbc14 --version 10.2.0.2
        dependency add --groupId com.oracle --artifactId ojdbc6 --version 11.2.0.3
        database properties set --key database.driverClassName --value oracle.jdbc.OracleDriver
        database properties set --key database.url --value jdbc:oracle:thin:@%YOUR_CONNECTION_INFO%
        database properties set --key database.username --value %YOUR_USERNAME%
        database properties set --key database.password --value %YOUR_PASSWORD%
        database reverse engineer --schema %YOUR_SCHEMA% --package ~.domain

    If you get any package loading exceptions when running the reverse engineer command, you can uninstall the OSGi bundle, set the package to optional in the OSGi pom in the IncludedPackages tag (javax.some.package.*;resolution:=optional), rebuild, and then reinstall in Roo.

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then trying to work out where your connections are pointing can sometimes be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean

                ' Get the connection string, we need to know the name of the connection
                Dim connectionName As String = "My OLE-DB Connection"
                Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

                ' Format the message and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connectionName, connectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean

                ' Loop through all connections in the package
                For Each connection As ConnectionManager In Dts.Connections
                    ' Get the connection string and log it via an information event
                    Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                        connection.Name, connection.ConnectionString)
                    Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
                Next

                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    By using the Information event you make the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and you can also control the logging in the normal way by choosing which events to log.

    Now, before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similarly sensitive property, is always defined as write-only; and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won't find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish there were a custom log entry you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference:

        Custom Messages for Logging - Provides information about the execution phases of the
        SQL statement. Log entries are written when the task acquires connection to the
        database, when the task starts to prepare the SQL statement, and after the execution
        of the SQL statement is completed. The log entry for the prepare phase includes the
        SQL statement that the task uses.

    It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being generated? You do need to turn it on; by default no custom log events are captured, but I'll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.

    Read the article

  • Connecting to FreeNX server: configuration error

    - by Sandeep
    I am not able to pinpoint what is missing. I have configured the freenx-server (useradd, passwd, etc.); however, the server drops the connection after authentication. Please note my server is Ubuntu 10.04 and the client is a recent version. Below is the error log:

        NX> 203 NXSSH running with pid: 6016
        NX> 285 Enabling check on switch command
        NX> 285 Enabling skip of SSH config files
        NX> 285 Setting the preferred NX options
        NX> 200 Connected to address: 192.168.2.2 on port: 22
        NX> 202 Authenticating user: nx
        NX> 208 Using auth method: publickey
        HELLO NXSERVER - Version 3.2.0-74-SVN OS (GPL, using backend: 3.5.0)
        NX> 105 hello NXCLIENT - Version 3.2.0
        NX> 134 Accepted protocol: 3.2.0
        NX> 105 SET SHELL_MODE SHELL
        NX> 105 SET AUTH_MODE PASSWORD
        NX> 105 login
        NX> 101 User: sandeep
        NX> 102 Password:
        NX> 103 Welcome to: ubuntu user: sandeep
        NX> 105 listsession --user="sandeep" --status="suspended,running" --geometry="1366x768x24+render" --type="unix-gnome"
        NX> 127 Sessions list of user 'sandeep' for reconnect:

        Display Type             Session ID                       Options  Depth Screen         Status      Session Name
        ------- ---------------- -------------------------------- -------- ----- -------------- ----------- ------------------------------

        NX> 148 Server capacity: not reached for user: sandeep
        NX> 105 startsession --link="lan" --backingstore="1" --encryption="1" --cache="16M" --images="64M" --shmem="1" --shpix="1" --strict="0" --composite="1" --media="0" --session="t" --type="unix-gnome" --geometry="1366x768" --client="linux" --keyboard="pc105/us" --screeninfo="1366x768x24+render"

        ssh_exchange_identification: Connection closed by remote host
        NX> 280 Exiting on signal: 15
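
    The session dies at the point where the server opens its second SSH connection back to the host ("ssh_exchange_identification: Connection closed by remote host"), so the server-side SSH logs usually say why. A sketch of generic first checks; the nxsetup path is an assumption based on the usual FreeNX packaging:

        # See why sshd refused the connection
        sudo tail -n 50 /var/log/auth.log

        # Make sure TCP wrappers aren't blocking local connections
        grep -i ssh /etc/hosts.allow /etc/hosts.deny

        # Regenerate the FreeNX node configuration and keys
        sudo /usr/lib/nx/nxsetup --install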

    Read the article

  • Is the difference between sudo and gksu the same as the difference between sudo -i and sudo -s?

    - by fred.bear
    Is the difference between sudo cmd and gksu cmd the same as the difference between starting a shell with sudo -i and one with sudo -s? Or, put another way: is sudo cmd the same as sudo -i cmd, and gksu cmd the same as sudo -s cmd?

    EDIT: This is based on what I read on an Ubuntu documentation page, where it says:

        You should never use normal sudo to start graphical applications as root. You should
        use gksudo (kdesudo on Kubuntu) to run such programs. gksudo sets HOME=~root, and
        copies .Xauthority to a tmp directory. This prevents files in your home directory
        becoming owned by root. (AFAICT, this is all that's special about the environment of
        the started process with gksudo vs. sudo).

    The "AFAICT" doesn't really give me full confidence that there is nothing more to it. (A belated UPDATE: two months later, I tested the claim that "This prevents files in your home directory becoming owned by root": all files I created via sudo/gksu were owned by "root", and the group was "root".) I've read parts of info sudo and noticed the -i and -s options seem to be doing the same thing as the AFAICT environment issue... but I hit overload, so I've asked my question here.

    PS. My question is not about sudo vs. gksu. It is more about: is gksu the same as sudo -s, and if not, how do they differ?
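
    A quick empirical way to compare the variants (a sketch; exactly what plain sudo preserves depends on its env_reset configuration, so results can vary between releases):

        # Compare the environment each variant hands to the command
        sudo    env | grep -E '^(HOME|USER|LOGNAME)='   # normal sudo
        sudo -s env | grep -E '^(HOME|USER|LOGNAME)='   # non-login root shell
        sudo -i env | grep -E '^(HOME|USER|LOGNAME)='   # login shell: HOME=/root
        gksu env    | grep -E '^(HOME|USER|LOGNAME)='   # note HOME and XAUTHORITY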

    Read the article

  • Lightest weight ubuntu desktop for text editing in big terminal windows?

    - by Kevin Pauli
    I have an older Windows laptop onto which I'm installing Ubuntu within a VM. My goal is just to use terminal-based Linux tools such as vim and shell scripting. I don't give a hoot about any GUI for this box. So I first installed the Ubuntu minimal CD and chose "Basic Ubuntu Server". Upon boot, the text-based terminal came up and I logged in, but the problem is it only gives me 80 columns. I want to do terminal-mode vim but have a couple hundred columns to take advantage of my large monitor. If you happen to know how to do that, please see my question here. This post assumes that the other question is not answerable, and that I will need a desktop to get more than 80 columns in a terminal window. So if that is the case, I want the lightest-weight one possible, because this is older hardware and all I want is the ability to have nice big text-based terminal windows for editing text. From the Ubuntu minimal CD, I see options for Edubuntu, Kubuntu, etc. Which one of the available desktops would be a good choice for my needs?
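
    Before reaching for a desktop, a sketch of one fix for the 80-column console itself: boot the VM with a framebuffer video mode via GRUB (the resolution below is an example; pick one your virtual display supports):

        # In /etc/default/grub, set a graphical boot mode and keep it after boot:
        #   GRUB_GFXMODE=1280x1024
        #   GRUB_GFXPAYLOAD_LINUX=keep
        sudo nano /etc/default/grub

        # Apply the change and reboot
        sudo update-grub && sudo reboot

        # Afterwards, verify the console geometry (prints "rows columns")
        stty size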

    Read the article

  • Creating a GTK theme, but Qt and Java apps are not affected, and title bar button layout is ugly

    - by Mr. Pixel
    I'm playing with a gtk2/gtk3 theme which I use in the Mate desktop. Everything is looking well, even gtk3 apps, but I still have 3 important issues:
    - Java apps ignore the theme
    - Qt apps ignore the theme
    - I'm using those nice Ubuntu 10 title bar buttons, but when only the close button appears, the title bar looks ugly. Can I make it show the two other buttons, but disabled? I don't know how Ubuntu 10 handled this.
    Here's a screenshot showing the 3 problems (above is a small Java app, below is a Qt app): Under my previous desktop environments, Unity and Cinnamon, both apps seemed to pick up the right theme correctly, but I was not yet using my custom theme. Cinnamon is based on gnome-shell, by the way, and Mate is a GNOME 2 fork. Please note that the Java app shown explicitly tries to load the GTK theme at runtime. By default, Java apps don't, but this one has the necessary code, which worked in Unity and Cinnamon. Any suggestions how I could make my theme better so these problems disappear? Thank you very much!
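
    A hedged aside (none of this comes from the question itself): GTK themes are invisible to Qt and Swing unless each toolkit is explicitly pointed at them, so the theme may not be at fault at all. Two common knobs, assuming Qt 4 and a Sun/Oracle-style JRE; App.jar is a hypothetical placeholder:

        # Qt 4 keeps its style in ~/.config/Trolltech.conf; the qtconfig tool
        # can switch it to the "GTK+" style so Qt apps follow the GTK theme.
        qtconfig-qt4          # under Appearance, select the "GTK+" style
        # Swing apps need the GTK look-and-feel selected explicitly:
        java -Dswing.defaultlaf=com.sun.java.swing.plaf.gtk.GTKLookAndFeel -jar App.jar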

    Read the article

  • basic beginning emacs questions - install latest version and pick appropriate UI

    - by MountainX
    I'm running the latest Kubuntu (12.04 beta 2) and I would like to run the latest emacs (currently v24). The repos are one version behind. What's the best way to install v24 or later (and avoid future version conflicts)? Also, is there any reason not to always use the GUI version of emacs if X is running? For example, could I set the GUI emacs as the default text editor and use it to edit cron jobs (crontab -e)? I'm assuming the answer is yes, but since I haven't done that yet (my default editor is nano), I want to check whether there are reasons to leave nano as the default. Usually when I'm working on the command line I end up using nano; now that I think about it, I have no idea why I keep doing that. Is there any downside to calling a GUI editor when working in an X terminal? EDIT: I briefly tested these two versions:
    - GNU Emacs 24.0.94.1 (x86_64-pc-linux-gnu, GTK+ Version 3.3.20)
    - GNU Emacs 23.3.1 (x86_64-pc-linux-gnu), installed by default in Kubuntu
    This post explains some of the differences between versions. Unfortunately for me, the version installed by default (23.3.1, 23.3+1-1ubuntu9) is the nox build:

        Package: emacs23-nox
        Status: install ok installed
        Version: 23.3+1-1ubuntu9
        Replaces: emacs23, emacs23-gtk, emacs23-lucid

    The version 24 package opens in GUI mode by default, which is what I prefer. Some of the version 24 changes that interest me are listed in the references below, but there appear to be a multitude of different packages and versions I could install.
    References:
    What's New In Emacs 24 (part 1) | Mastering Emacs
    http://www.masteringemacs.org/articles/2011/12/06/what-is-new-in-emacs-24-part-1/
    "shell-mode uses pcomplete rules, with the standard completion UI. Yowzah! There's a lot of cool, new functionality hidden away in this gem of a change."
    EmacsWiki: Recent Changes
    http://www.emacswiki.org/emacs/?action=rc;showedit=0
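
    On the "GUI emacs as default editor" part, a minimal sketch, assuming the emacs binary is on PATH; add the exports to ~/.bashrc to make them stick:

        # Make crontab -e (and most CLI tools) open the GUI emacs under X.
        export VISUAL=emacs
        export EDITOR="$VISUAL"
        crontab -e          # opens in a GUI emacs frame; saved on exit
        # Without X (ssh session, console), emacs falls back to terminal mode
        # on its own, so there is little downside versus nano beyond startup time.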

    Read the article

  • DAC pack up all your troubles

    - by Tony Davis
    Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal, but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"): a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on.

    It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DACPAC for deployment and management. DBAs get an "application level view" of how their instances are being used and the ability to collectively, rather than individually, manage the objects. The DBA needing to manage a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all the databases they manage.

    The reason that DAC packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases.

    The first problem is that DAC packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR objects, and, perhaps most critically, security (permissions, certificates etc.) are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea.

    More worrying still is the process for altering a database with a DAC pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple, eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well as sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping.

    As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC packs. Like it or not, they're both coming. Cheers, Tony.

    Read the article

  • ubuntu apache2 start, stop, status, restart and reload commands fails

    - by Assil
    Hi, I am new to Ubuntu and I'm still trying to figure this OS out (for work it's brilliant, better than Windows). Before I wrote this question, I did my research via Google, this website and even StackOverflow.com; whatever error came up, I googled it, but with no success on how to solve this. Back to the main point: I tried to install (LAMP) apache2 with this guide (in German) and this one (in English). Then I got stuck at the start command (the one that starts the apache2 server). My first try to run the server was a success, until I ran sudo /etc/init.d/apache2 reload; then it showed me an error:

        $ sudo /etc/init.d/apache2 reload
         * Starting web server apache2
        Syntax error on line 1 of /etc/apache2/ports.conf:
        Invalid command 'oder', perhaps misspelled or defined by a module not included in the server configuration
        Action 'reload' failed.
        The Apache error log may have more information.
        [fail]

    I didn't even write the word "oder", so I closed the shell and opened it anew. It showed the same error. After that, I did some research and found out that my file index.html (which is in my /var/www folder, and which I access via the browser by typing http://localhost) should show up and confirm that the install was a success. So I removed apache2 and installed it again, but now the following error appears:

        $ sudo /etc/init.d/apache2 start
         * Starting web server apache2
        Syntax error on line 1 of /etc/apache2/ports.conf:
        Invalid command 'oder', perhaps misspelled or defined by a module not included in the server configuration
        Action 'start' failed.
        The Apache error log may have more information.
        [fail]

    And still, I didn't write "oder" in any command. I appreciate every help I can get; further thanks and have a nice day.
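
    The error itself hints at the diagnosis: 'oder' is German for 'or', so line 1 of /etc/apache2/ports.conf most likely contains text accidentally pasted from the German guide. A hedged sketch of restoring the stock file; the contents below are assumed from the Ubuntu 10.04-era apache2 default and may differ on other versions:

        # Restore a sane ports.conf, then start apache2.
        # Assumption: these are the stock directives for this apache2 version.
        sudo tee /etc/apache2/ports.conf > /dev/null <<'EOF'
        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # SSL name-based virtual hosts are only supported for one port.
            Listen 443
        </IfModule>
        EOF
        sudo /etc/init.d/apache2 start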

    Read the article

  • Unattended grub configuration after kernel upgrade

    - by bouke
    Today I have been working on automatic deployment of an Ubuntu server. I got stuck on automatic updating of the server: apt-get upgrade stalls when trying to upgrade to a new kernel. The log looks like this:

        Setting up linux-image-3.2.0-24-generic (3.2.0-24.39) ...
        Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
        (...)

    Then a question is presented:

        Package configuration

        +----------------------------------------------------------------------+
        | A new version of /boot/grub/menu.lst is available, but the version   |
        | installed currently has been locally modified.                       |
        |                                                                      |
        | What would you like to do about menu.lst?                            |
        |                                                                      |
        |    install the package maintainer's version                          |
        |    keep the local version currently installed                        |
        |    show the differences between the versions                         |
        |    show a side-by-side difference between the versions               |
        |    show a 3-way difference between available versions                |
        |    do a 3-way merge between available versions (experimental)        |
        |    start a new shell to examine the situation                        |
        |                                                                      |
        |                                 <Ok>                                 |
        +----------------------------------------------------------------------+

    The desired outcome would be to select the first option and to continue:

        Replacing config file /run/grub/menu.lst with new version
        Updating /boot/grub/menu.lst ... done

    After running the upgrade by hand, I used debconf-get-selections to inspect the correct answer to the question (see other settings). It seems that update_grub_changeprompt_threeway is the question that should be answered. However, setting this using debconf-set-selections presented me with the same question:

        debconf-set-selections <<< "grub grub/update_grub_changeprompt_threeway select install_new"
        apt-get -y dist-upgrade

    How can this question be automated?
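
    One caveat worth checking (an assumption on my part, not something shown in the log): prompts about locally modified config files are raised by dpkg/ucf rather than by debconf, in which case the pre-seeded debconf key is never consulted. A common non-interactive pattern that forces the first menu option ("install the package maintainer's version"):

        # Force the maintainer's version of changed conffiles, non-interactively.
        # UCF_FORCE_CONFFNEW covers ucf-managed files such as menu.lst.
        sudo UCF_FORCE_CONFFNEW=1 DEBIAN_FRONTEND=noninteractive \
          apt-get -y -o Dpkg::Options::="--force-confnew" dist-upgrade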

    Read the article

  • Crontab -e gives me error messages

    - by DNA
    I get a bunch of error messages when I run crontab -e. Here are the error messages. And here is my crontab file under `/usr/bin/':

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        30 *  * * *  root  rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/
        #

    I notice that the last task ('rsync') NEVER RUNS! Why is this happening? What did I do wrong? Running Ubuntu 11.10/Bash. I have read this... Am I missing a shebang? And I don't know if my anacron jobs run.

    Edit 1: In light of Masi's comment, I commented out lines 17 thru 25 of my crontab file with #. Now when I run sudo crontab -e, all I get is:

        /usr/bin/crontab: 11: 17: not found
        /usr/bin/crontab: 12: 25: not found
        (gedit:4301): Gtk-WARNING **: Attempting to store changes into `/root/.local/share/recently-used.xbel', but failed: Failed to create file '/root/.local/share/recently-used.xbel.GOHVBW': No such file or directory
        (gedit:4301): Gtk-WARNING **: Attempting to set the permissions of `/root/.local/share/recently-used.xbel', but failed: No such file or directory

    What in the world?
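
    A hypothesis suggested by the error text, not confirmed in the question: "my crontab file under /usr/bin/" sounds as though a copy of the crontab content was saved as /usr/bin/crontab, shadowing the real binary. `/usr/bin/crontab: 11: 17: not found` is exactly what /bin/sh prints when it executes a crontab file as a script (line 11, unknown command "17"). A quick check:

        # Is /usr/bin/crontab still the real binary, or a pasted text file?
        file /usr/bin/crontab       # should report an ELF executable, not "ASCII text"
        head -n 3 /usr/bin/crontab  # a pasted crontab would show the "# /etc/crontab" header
        # Also note: /etc/crontab lines carry a sixth "user" field; entries
        # added via "crontab -e" must omit it, or the command becomes "root ...".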

    Read the article

< Previous Page | 568 569 570 571 572 573 574 575 576 577 578 579  | Next Page >