Search Results

Search found 39751 results on 1591 pages for 'add'.

Page 628/1591

  • Change from tri-boot to dual-boot

    - by Andrew Robinson
    I have been tri-booting Windows 7, Windows 8 Release Candidate and Ubuntu 12.04 LTS for a few months now. I have decided that, since I have no touch screen, I will not purchase Win 8. I now want to get rid of the Win 8 RC, then add that partition space to my Ubuntu partition, but have no idea how to accomplish this. Do I need to uninstall Win 8 RC from within Windows first? The grub loader sends me to the Win 8 loader, where I have Win 7 as the default. Does that complicate things? Any assistance anyone can give would be greatly appreciated.
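
    (A sketch of the common approach, not tied to this exact disk layout: you don't need to uninstall Win 8 from within Windows. From a live USB, delete the Win 8 RC partition with GParted and grow the Ubuntu partition into the freed space, which is only straightforward if the two are adjacent; then refresh GRUB's menu. If the Windows boot menu still lists Win 8 afterwards, that entry lives in the Windows BCD store and can be removed from within Win 7, e.g. with msconfig or bcdedit:)

        # from the installed Ubuntu, after repartitioning in a GParted live session:
        sudo update-grub            # rebuilds the menu; the stale Win 8 entry goes away

        # inside Windows 7, if its boot menu still offers Win 8 (identifier is yours):
        # bcdedit /delete {identifier-of-the-win8-entry}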


  • How to get exfat working in Ubuntu 13.10?

    - by davorao
    In Ubuntu 13.04 there was an option to use the PPA by Relan in order to get exFAT functionality. Seeing that in Ubuntu 13.10 this functionality is available from the repositories without the PPA, I tried that but failed to make it work. Trying the PPA in Ubuntu 13.10 does not yield results either, as it complains about the missing fuse-utils package. This package seems to have been removed (https://launchpad.net/ubuntu/saucy/amd64/fuse-utils). So my question is: did the process of enabling exFAT change in the jump from Ubuntu 13.04 to 13.10, and how do I enable it now?

    Old way:

        sudo apt-add-repository ppa:relan/exfat
        sudo apt-get update
        sudo apt-get install exfat-fuse exfat-utils

    Tried now, via the Ubuntu Software Center packages:

        sudo apt-get install exfat-utils exfat-fuse
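
    (For what it's worth, on 13.10 the repository packages alone are normally enough; a minimal hedged test, where the device name /dev/sdb1 is an assumption to replace with your exFAT partition:)

        sudo apt-get install exfat-fuse exfat-utils
        sudo mkdir -p /media/exfat
        sudo mount -t exfat /dev/sdb1 /media/exfat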


  • Coordinating team code review sessions

    - by Wade Tandy
    My question has two parts:

    1) In your team or organization, do you ever do in-person code reviews with all or part of a team, as opposed to online reviews using some sort of tool?

    2) How do you structure these meetings? Do you choose to focus on one person's code in a given meeting? Do you look at everything? Take a random sample? Ask people on the team which of their code they'd like to have looked at?

    I'd love to add this practice to my development team, so I'd like to hear how others are doing it.


  • Why use link classes in OQL instead of classes that contain links

    - by Isaac
    itop abstracts its very complex database design with an object query language (OQL). For this there are classes defined, like 'Ticket' and 'Server'. Now a Ticket is usually linked to a Server. In my naive way I would give the Ticket class an attribute 'affected_server_list', where I could reference the affected servers. itop does it differently: neither Servers nor Tickets know of each other. Instead there is a class 'linkTicketToServer', which provides the link between the two. The first thing I noticed is that it makes OQL queries more complex. So I wondered why they designed it this way. One thing that occurred to me is that it allows for more flexibility, in that I can add links without modifying the original classes. Is this already the reason why one would implement it this way, or are there other reasons for this kind of design?
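
    (To make the trade-off concrete: with a link class, "which servers does ticket 123 affect?" becomes a join through the link object. A rough sketch in the spirit of itop's OQL, using the class names above; treat the exact syntax as illustrative:)

        SELECT Server
        JOIN linkTicketToServer AS lnk ON lnk.server_id = Server.id
        WHERE lnk.ticket_id = 123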


  • ISO checksum menu in nautilus

    - by dellphi
    I'm using Ubuntu 11.10 x64, Unity, and Nautilus 3.2.1. I want to be able to perform ISO checksums in Nautilus. I have searched and found http://www.addictivetips.com/ubuntu-linux-tips/nautilus-actions-extra-add-more-features-to-ubuntu-context-menu/ and performed the procedures written there. Once completed, the other menus appear, except the menu for "checksum verification". Then, when I run gksu nautilus-actions-config-tool, I am just asked to enter a user password, and.... nothing happens. What should I do to bring up the "checksum verification" menu? I am very grateful for any response or advice given.
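
    (As a stopgap while that menu is missing, a plain Nautilus script can do the job. A hedged sketch; the script directory for Nautilus 3.2 is usually ~/.gnome2/nautilus-scripts, and the use of md5sum and zenity is my choice:)

        #!/bin/bash
        # Save as ~/.gnome2/nautilus-scripts/Checksum and run: chmod +x Checksum
        # It then appears under the right-click Scripts menu for selected files.
        md5sum "$@" | zenity --text-info --title="MD5 checksums" --width=600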


  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 2 of 4

    - by Kelly Jones
    In my last post, I explained why we decided to use a Click Once application to solve our business problem. To quickly review, we needed a way for our business users to upload documents to a SharePoint 2007 document library en masse, set the meta data, set the permissions per document, and to do so easily.

    Let's look at the pieces that make up our solution. First, we have the Windows Form application. This app is deployed using Click Once and calls SharePoint web services in order to upload files, and then calls web services to set the meta data (SharePoint columns and permissions). Second, we have a custom action. The custom action is responsible for providing our users a link that will launch the Windows app, as well as passing values to it via the query string. And lastly, we have the web services that the Windows Form application calls. For our solution, we used both out of the box web services and a custom web service in order to set the column values in the document library as well as the permissions on the documents.

    Now, let's look at the technical details of each of these pieces. (All of the code is downloadable from here: )

    Windows Form application deployed via Click Once

    The Windows Form application, called "Custom Upload", has just a few classes in it:

        Custom Upload -- the form
        FileList.xsd -- the dataset used to track the names of the files and their meta data values
        SharePointUpload -- this class handles uploading the file

    SharePointUpload uses an HttpWebRequest to transfer the file to the web server. We had to change this code from a WebClient object to the HttpWebRequest object, because we needed to be able to set the time out value.

        public bool UploadDocument(string localFilename, string remoteFilename)
        {
            bool result = true;
            // Need to use an HttpWebRequest object instead of a WebClient object
            // so we can set the timeout (WebClient doesn't allow you to set the timeout!)
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(remoteFilename);
            try
            {
                req.Method = "PUT";
                req.Timeout = 60 * 1000; // convert seconds to milliseconds
                req.AllowWriteStreamBuffering = true;
                req.Credentials = System.Net.CredentialCache.DefaultCredentials;
                req.SendChunked = false;
                req.KeepAlive = true;

                Stream reqStream = req.GetRequestStream();
                FileStream rdr = new FileStream(localFilename, FileMode.Open, FileAccess.Read);
                byte[] inData = new byte[4096];
                int bytesRead = rdr.Read(inData, 0, inData.Length);
                while (bytesRead > 0)
                {
                    reqStream.Write(inData, 0, bytesRead);
                    bytesRead = rdr.Read(inData, 0, inData.Length);
                }
                reqStream.Close();
                rdr.Close();

                System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();
                if (response.StatusCode != HttpStatusCode.OK &&
                    response.StatusCode != HttpStatusCode.Created)
                {
                    String msg = String.Format(
                        "An error occurred while uploading this file: {0}\n\nError response code: {1}",
                        System.IO.Path.GetFileName(localFilename),
                        response.StatusCode.ToString());
                    LogWarning(msg, "2ACFFCCA-59BA-40c8-A9AB-05FA3331D223");
                    result = false;
                }
            }
            catch (Exception ex)
            {
                LogException(ex, "{E9D62A93-D298-470d-A6BA-19AAB237978A}");
                result = false;
            }
            return result;
        }

    The class also contains the LogException() and LogWarning() methods. When the application is launched, it parses the query string for some initial values. The query string looks like this:

        string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";

    The Srv value is the path to the server (my Virtual Machine is named "clickonce"), Sec is short for security – meaning HTTPS or HTTP, Doc is the shortcut for which document library to use, and SiteName is the name of the SharePoint site. Speed is used to calculate an estimated download speed for each file. We added this so our users uploading documents would realize how long it might take for clients in remote locations (using slow WAN connections) to download the documents. The last value, Max, is the maximum size that the SharePoint site will allow documents to be. This allowed us to warn users that a file is too large before we even attempt to upload it.

    Another critical piece is the meta data collection. We organized our site using SharePoint content types, so when the app loads, it gets a list of the document library's content types. The user then selects one of the content types from the drop down list, and then we query SharePoint to get a list of the fields that make up that content type. We used both an out of the box web service and one that we custom built in order to get these values.

    Once we have the content type fields, we then add controls to the form. Which type of control we add depends on the data type of the field. (DateTime pickers for date/time fields, etc.) We didn't write code to cover every data type, since we were working with a limited set of content types and field data types.

    [Screenshots: the form, before and after someone has selected the content type and our code has added the custom controls.]

    The other piece of meta data we collect is in the upper right corner of the app, "Users with access". This box lists the different SharePoint Groups that we have set up, and by checking the boxes the user can set the permissions on the uploaded documents.

    All of this meta data is collected and submitted to our custom web service, which then sets the values on the documents in the list. We'll look at these web services in a future post. In the next post, we'll walk through the Custom Action we built.
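
    (As an aside, parsing a query string like that takes only a few lines; this is a hedged sketch rather than the post's actual code, and the variable names are mine. In a ClickOnce app the raw query string typically comes from ApplicationDeployment.CurrentDeployment.ActivationUri.)

        // sketch: requires a reference to System.Web
        string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";
        var args = System.Web.HttpUtility.ParseQueryString(queryString);

        string server   = args["Srv"];            // web server name
        bool   useHttps = args["Sec"] == "Y";     // HTTPS or HTTP
        string library  = args["Doc"];            // which document library
        int    maxSize  = int.Parse(args["Max"]); // site's max upload size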


  • Skinning with DotNetNuke 5 Super Stylesheets Layouts - 12 Videos

    In this video tutorial we demonstrate how to use Super Stylesheets in DotNetNuke for quickly and easily designing the layout of your DotNetNuke skin. Super Stylesheets are ideal for both beginner and experienced skin designers; the advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. We show you how to build a skin from the very beginning using Super Stylesheets. The videos contain:

        Video 1 - Introduction to the Super Stylesheets DNN Layouts and Initial Setup
        Video 2 - Setting Up the Skin Layout Template Code
        Video 3 - Using the ThreeCol-Portal Layout Template for a Skin
        Video 4 - How to Add Tokens to the Skin
        Video 5 - Setting Background Colors for Content Panes and Creating CSS Containers
        Video 6 - How to Create a Footer Area and Reset the Default Styles
        Video 7 - How to Style the Text in the Content, Left and Right Panes
        Video 8 - SEO Skin Layouts for DotNetNuke Tokens
        Video 9 - Creating Several Skin Layouts Using the Layout Templates
        Video 10 - Further Layout Templates and MultiLayout Templates
        Video 11 - SEO Layout Template Skins
        Video 12 - Final SEO Positioning of the Skin Code

    Skinning with DotNetNuke Super Stylesheets - DNN Layouts


  • SQL SERVER – Enable PowerPivot Plugin in Excel

    - by pinaldave
    Recently I had an interesting experience at one conference. My PowerPivot plugin got disabled and I had no clue how to enable it again. After a while, I figured it out. Once I got back from the event, I searched online and realized that many other people are facing the same problem. Here is how I solved it. When I started Excel, it did not load the PowerPivot plugin. Under Options >> Add-Ins I found the plugin listed as disabled. I enabled the plugin and it worked very well. Let us see that with images. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: PowerPivot


  • Using Java classes in JRuby

    - by kerry
    I needed to do some reflection today, so I fired up interactive JRuby (jirb) to inspect a jar. Surprisingly, I couldn't remember exactly how to use Java classes in JRuby. So I did some searching on the internet and found this to be a common question with many answers. So I figure I will document it here in case I forget in the future. Add its folder to the load path, require it, then use it!

        $: << '/path/to/my/jars/'
        require 'myjar'

        # so we don't have to reference it absolutely every time (optional)
        include Java::com.goodercode

        my_object = SomeClass.new
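
    (For completeness, an equivalent spelling I've seen that imports a single class instead of including the whole package; the jar path is the same assumption as above:)

        require 'java'
        $CLASSPATH << '/path/to/my/jars/myjar.jar'

        # bind just the one class into the current namespace
        java_import 'com.goodercode.SomeClass'
        my_object = SomeClass.new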


  • My session at the Vancouver Silverlight User Group

    - by pluginbaby
    Next week I will be in Vancouver to talk at the local user group: the Vancouver Silverlight User Group.

    Title: HTML5 and Silverlight 5: facts, assumptions and near future

    Abstract: In this session, I will try to clarify what we hear (and don't hear) around these technologies, and maybe add a few guesses about their role in Windows 8... as well as present a technical comparison between HTML5 and Silverlight 5: HTML vs XAML, tools, languages, databinding, performance, etc.

    Date: Wednesday, July 6, 2011

    Thanks to Telerik for sponsoring the room for this event. More details and registration: http://www.meetup.com/Vancouver-Silverlight-User-Group/events/22849231/


  • Object-Oriented equivalent of LISP's progn function?

    - by Archer
    I'm currently writing a LISP parser that iterates through some AutoLISP code and does its best to make it a little easier to read (changing prefix notation to infix notation, changing setq assignments to "=" assignments, etc.) for those who aren't used to LISP code or only learned object-oriented programming. While writing the commands that make up my "library" of LISP commands, I came across the LISP command "progn". The only problem is that it looks like progn simply executes code in a specific order and sometimes (not usually) assigns the last value to a variable. Am I incorrect in assuming that, when translating progn into object-oriented terms, I can simply forgo the progn function and print the statements it contains? If not, what would be a good equivalent for progn in an object-oriented language?
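
    (That reading is essentially right: progn evaluates its forms in order, and the value of the last form becomes the value of the whole progn. So a faithful rendering is an ordinary statement sequence whose final expression feeds the assignment. A small sketch with hypothetical names:)

        ; AutoLISP-style input
        (setq x (progn
                  (do-first-thing)
                  (do-second-thing)
                  (compute-final-value)))

        ; one reasonable infix/OO-style rendering:
        ;   doFirstThing();
        ;   doSecondThing();
        ;   x = computeFinalValue();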


  • security cameras and Ubuntu

    - by Josh
    I am setting up a server of sorts and chose Ubuntu for the OS, as my dad has it on a few computers. I am unimpressed with Windows or Mac due to all the add-ons and complexity when all I want is something simple. The system will have 3 purposes: storing my wife's photography work (she is a professional photographer), storing music for quick access from our entertainment system (it will be running through the TV in our living room and thus through our surround sound), and serving as a DVR unit for a home security system I am going to put together. My question is what sort of software options there are for Ubuntu as far as a DVR with frame-by-frame playback. It does not need to be fancy, but of course a variety of options is a nice touch.


  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc, to be able to mark whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string based field within the template, one that is not very visible and which we don't use, to hold a small set of custom values which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: we are also using Visual Studio 2012.)

    Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don't appear to be answers at all, given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

    Go here and get the "TFS Service Credential Viewer". Install it, run it and connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else.

    In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0":

        Microsoft.TeamFoundation.Client.dll
        Microsoft.TeamFoundation.Common.dll
        Microsoft.TeamFoundation.VersionControl.Client.dll
        Microsoft.TeamFoundation.VersionControl.Common.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.dll
        Microsoft.TeamFoundation.WorkItemTracking.Common.dll

    If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under.

    You also have to allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:

        <appSettings>
          <!-- Add reference to TFS Client Cache -->
          <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
        </appSettings>

    With all that in place, you can write the following code:

        var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential(
            "{your-service-account-name}", "{your-service-acct-password}");
        var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
        var currentCollection = new TfsTeamProjectCollection(
            new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
        currentCollection.EnsureAuthenticated();

    In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service account username and password that were generated in the first step are substituted in here.

    Note: If something is not right, the EnsureAuthenticated() call will throw an exception with the message being that you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a message saying you are not authorised; that is, a similar but different exception message.

    And that is it. You can then query the collection using something like:

        var service = currentCollection.GetService<WorkItemStore>();
        var proj = service.Projects[0];
        var allQueries = proj.StoredQueries;
        for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
        {
            var query = allQueries[qcnt];
            var queryDesc = string.Format("Query found named: {0}", query.Name);
        }

    You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard, since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.


  • Removing existing filtered pages from Google's index: noindex / 301 / canonical to non-filtered page?

    - by Noam
    I've decided to remove some of my site's pages from the Google index, to focus the indexed pages on higher quality pages. The pages I'm going to remove are already in the index. These removed pages are filtered pages which will continue to exist; I just don't want them in the Google index because they add little quality over the same page without any filter selected. In Webmaster Tools I've specified "narrows" for the parameters that set these filters, but that doesn't seem to change anything in how Google handles these pages. So I'm considering three options:

    1. Adding <meta name="robots" content="noindex" /> to the HTML header of these filtered pages.
    2. A 301 to the non-filtered page that contains the most similar information and will remain in the index.
    3. A canonical tag, which I'm not sure is exactly the mainstream use case, as these aren't really the same pages.

    Which should I use?
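
    (For reference, options 1 and 3 are one line each in the page head; the URL here is a placeholder:)

        <!-- option 1: keep serving the page but drop it from the index -->
        <meta name="robots" content="noindex" />

        <!-- option 3: point the filtered page at its unfiltered parent -->
        <link rel="canonical" href="http://example.com/category/" />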


  • I cannot update my Ubuntu 12.04. After installing 12.04, I only get error reports when I try to update

    - by cees groenewoud
    The report I received (translated from Dutch via Google Translate):

    Could not initialize the package information. An unresolvable problem occurred while initializing the package information. Please report this error against the "update-manager" package and include the following message:

        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/nl.archive.ubuntu.com_ubuntu_dists_precise_main_i18n_Translation-en
        E: The package lists or status file could not be parsed or opened.
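
    (For what it's worth, that MergeList error usually means one of the downloaded package list files is corrupt. A commonly suggested fix, offered here as a sketch rather than a guarantee, is to clear the cached lists and rebuild them:)

        sudo rm /var/lib/apt/lists/* -vf
        sudo apt-get update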


  • How should modules access data outside their scope?

    - by Joe
    I run into this same problem quite often. First, I create a namespace and then add modules to this namespace. The issue I always run into is how best to initialize the application. Naturally, each module has its own startup procedure, so should this data (not code in some cases, just a list of items to run) stay with the module? Or should there be a startup procedure in the global namespace which holds the startup data for ALL the modules? Which is the more robust way of organizing this situation? Should some things be centralized, or should there be strict adherence to modules encapsulating everything about themselves? Though this is a general architecture question, JavaScript-centric answers would be really appreciated!
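
    (One hedged sketch of a middle ground: each module keeps its own startup data behind an init() method, and the global namespace only knows how to iterate and delegate. Names here are illustrative:)

        var APP = APP || {};
        APP.modules = {};

        APP.modules.audio = {
            startupItems: ['mixer', 'playlist'],   // data stays with the module
            init: function () { /* run this module's startup list */ }
        };

        APP.modules.video = {
            startupItems: ['decoder'],
            init: function () { /* ... */ }
        };

        // the only centralized piece: call init() on everything registered
        APP.bootstrap = function () {
            for (var name in APP.modules) {
                if (APP.modules.hasOwnProperty(name)) {
                    APP.modules[name].init();
                }
            }
        };

        APP.bootstrap();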


  • SQL Server 2008 Service Pack 1 and the Invoke or BeginInvoke cannot be called error message

    - by Jeff Widmer
    When trying to install SQL Server 2008 Service Pack 1 on a SQL Server 2008 instance that is running in a virtual machine, the installer will start, but after about 20 seconds I receive the following error message:

        TITLE: SQL Server Setup failure.
        ------------------------------
        SQL Server Setup has encountered the following error:
        Invoke or BeginInvoke cannot be called on a control until the window handle has been created.
        ------------------------------
        BUTTONS: OK
        ------------------------------

    Searching for this issue, I found that several people have the same problem and there is no clear solution. Some had success with closing windows or Internet Explorer, but that didn't work for me; what did work is to make sure the SQL Server 2008 "Please wait while SQL Server 2008 Setup processes the current operation." dialog is selected and has the focus when it first shows up. [Screenshots: the dialog with and without focus.]

    Add a comment if you find out any information about how to consistently get around this issue or why it is happening in the first place.


  • How to import more than one SSIS package into BIDS in one shot!

    - by Luca Zavarella
    Have you ever wanted to add more than one existing Integration Services package (e.g. 20 packages) to an SSIS project? Well, you might suppose that an Open dialog supports multiple file selection in order to import more than one file at a time... BIDS' Open dialog doesn't allow this; you can only select a single file! Hence the loss of valuable time spent importing the packages one at a time. A few days ago I learned a trick that solves the problem, thanks to this post by Matt Masson. Just copy all the packages to import from Windows Explorer (Ctrl + C), then right click on the SSIS Packages folder of the Integration Services project and do a simple Paste (Ctrl + V). So "auto-magically" you'll have all those packages imported into your Integration Services project!! What can I say... this feature was well hidden!


  • How do I inject test objects when the real objects are created dynamically?

    - by JW01
    I want to make a class testable using dependency injection. But the class creates multiple objects at runtime, and passes different values to their constructor. Here's a simplified example:

        public abstract class Validator {
            private ErrorList errors;

            public abstract void validate();

            public void addError(String text) {
                errors.add(new ValidationError(text));
            }

            public int getNumErrors() {
                return errors.count();
            }
        }

        public class AgeValidator extends Validator {
            public void validate() {
                addError("first name invalid");
                addError("last name invalid");
            }
        }

    (There are many other subclasses of Validator.) What's the best way to change this, so I can inject a fake object instead of ValidationError? I can create an AbstractValidationErrorFactory and inject the factory instead. This would work, but it seems like I'll end up creating tons of little factories and factory interfaces for every dependency of this sort. Is there a better way?
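
    (If the factory route wins out, it can stay small; here is a hedged sketch of that single seam, with the interface name following the question's AbstractValidationErrorFactory idea:)

        public interface ValidationErrorFactory {
            ValidationError create(String text);
        }

        public abstract class Validator {
            private final ErrorList errors;
            private final ValidationErrorFactory errorFactory;

            protected Validator(ErrorList errors, ValidationErrorFactory errorFactory) {
                this.errors = errors;
                this.errorFactory = errorFactory;
            }

            public abstract void validate();

            public void addError(String text) {
                // the seam: a test passes a factory that returns fakes
                errors.add(errorFactory.create(text));
            }

            public int getNumErrors() {
                return errors.count();
            }
        }

    (In tests you would hand in a factory that returns a recording fake; dependency injection frameworks call this general pattern "assisted injection".)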


  • Handling packet impersonation in a client-server model online game

    - by TheDespite
    I am designing a server-client model game library/engine. How do I, and should I even bother to, handle possible impersonation in the frequent update packets? In my current design anyone could copy a packet from someone else and modify it to execute any non-critical action for another client. I am currently compressing all datagrams, so that adds just a tad of security.

    Edit: One way I thought about was to send a unique "key" to the verified client every x_time, and then the client has to add that to all of its update packets until a new key is sent.

    Edit 2: I should have mentioned that I am not concerned about whether the actions described in the packet are available to the client at the time; this is all checked by the server, which I thought was obvious. I am only concerned about someone sending packets for another client.
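
    (The rotating key idea in the first edit is essentially a message authentication code. A hedged Java sketch using only the standard library; how the per-client session key is distributed securely is assumed, and the class name is mine:)

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public final class PacketSigner {
            private final SecretKeySpec key;

            public PacketSigner(byte[] sessionKey) {
                this.key = new SecretKeySpec(sessionKey, "HmacSHA256");
            }

            // tag is appended to each update packet;
            // the server recomputes it and drops packets that don't match
            public byte[] tag(byte[] payload) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(key);
                return mac.doFinal(payload);
            }
        }

    (A client that never saw the session key cannot forge a valid tag, which blocks the copy-and-modify attack described above; adding a sequence number inside the signed payload also blocks straight replays.)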


  • Microsoft Codename Dallas

    - by kaleidoscope
    Dallas is Microsoft's Information Service offering, which allows developers and information workers to find, acquire and consume published datasets and web services. Users subscribe to datasets and web services of interest and can integrate the information into their own applications via a standardized set of APIs. Data can also be analyzed online using the Dallas Service Explorer, or externally using the PowerPivot Add-In for Excel. We can explore all the datasets and subscribe to the catalog to use the data. Dallas Developer Portal: https://www.sqlazureservices.com More information can be found at: http://channel9.msdn.com/learn/courses/Azure/Dallas/IntroToDallas/Overview/ Lokesh, M


  • How do I customize desktop wallpaper slideshow?

    - by Pithikos
    I spent some time and tried various things but nothing works. Here's what I have tried so far (changing the slideshow manually):

    1. Making a new folder /usr/share/backgrounds/mywallpapers and adding my own background-1.xml in there.
    2. Copying a bunch of my own wallpaper files into /usr/share/backgrounds/
    3. Copying /usr/share/backgrounds/Contest/background-1.xml to /usr/share/backgrounds/

    I logged out and in, and there are still no changes in the Appearance app. I have heard about Wallch, but I don't want some app running in the background all the time. I'm not even sure Wallch will work with GNOME 3. I also tried gnome-3-wp (Gnome 3 Wallpaper Slideshow app), but it just seems broken on Ubuntu 11.10 Oneiric. Does anyone have a solution?
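
    (Two pieces are usually needed for a custom slideshow to show up in the Appearance dialog: the slideshow XML itself, and a registration file under /usr/share/gnome-background-properties/ that points at it. A hedged sketch; file names, paths and durations are examples:)

        <!-- /usr/share/backgrounds/mywallpapers/background-1.xml -->
        <background>
          <starttime>
            <year>2011</year><month>11</month><day>01</day>
            <hour>00</hour><minute>00</minute><second>00</second>
          </starttime>
          <static>
            <duration>1795.0</duration>
            <file>/usr/share/backgrounds/mywallpapers/one.jpg</file>
          </static>
          <transition>
            <duration>5.0</duration>
            <from>/usr/share/backgrounds/mywallpapers/one.jpg</from>
            <to>/usr/share/backgrounds/mywallpapers/two.jpg</to>
          </transition>
        </background>

        <!-- /usr/share/gnome-background-properties/mywallpapers.xml -->
        <wallpapers>
          <wallpaper deleted="false">
            <name>My slideshow</name>
            <filename>/usr/share/backgrounds/mywallpapers/background-1.xml</filename>
            <options>zoom</options>
          </wallpaper>
        </wallpapers>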


  • How Easy is it to Code In-Built Videos?

    - by Alan Parker
    First time poster, so please don't bite my head off. Basically, I'm having a site built for me and I don't really know anything about coding, but I'm not too sure I trust my web developer. I recently asked him about adding a feature where I could display built-in videos like on the following page - http://www.ejot.co.uk/buildingfasteners.odl - and he quoted me quite a high amount for it. I just wanted to double check with you guys whether this is a difficult feature to add, and whether it justifies a significant amount of money on top of what I'm already paying him. Thanks in advance for your help, Alan
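
    (For a rough sense of scale: embedding a player on a modern page is a few lines of HTML5; the file names below are placeholders. The real effort in a quote tends to be encoding, hosting and fitting the player into the site's design rather than the embed itself:)

        <video controls width="640" poster="preview.jpg">
          <source src="product-demo.mp4" type="video/mp4">
          <source src="product-demo.webm" type="video/webm">
          Your browser does not support the video tag.
        </video>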


  • How can I create a solid business case for upgrading our programmers to 256 GB SSD and 16 GB of RAM?

    - by Alex. S.
    We have an environment based on the Microsoft stack (VS2010, SQL Server, etc.), and I firmly believe that we could improve productivity a little bit by having more RAM and a faster secondary SSD. What data do you advise gathering so I can back up my request in such a way that the advantages can be demonstrated without bias? Currently we have only 6 GB of RAM and slower HD drives, while at home I have a 128 GB SSD in my desktop and 16 GB of RAM (which I also think is the maximum amount of memory supported by our workstations; if we could go bigger, then better), so I can feel the difference and it's real. I also want to add that we are in an industry with plenty of money, so the issue is actually how to get budget approval from management and spend it wisely to increase productivity.


  • Script at Startup

    - by OttoRobba
    I'm using 10.10 and I need to run a script in order to get a Windows-like international keyboard layout; basically, it changes how dead keys work. (Original script from this page: http://t.tam.atbh.us/en/win-us-intl-4-linux/) Since I can't seem to get it going from boot, I have to run a custom script to launch any application. The script:

        export GTK_IM_MODULE=xim
        setxkbmap us intl
        xmodmap -e 'keycode 48 = dead_acute dead_diaeresis dead_acute dead_diaeresis acute diaeresis'
        application_name

    So if I put abiword in place of application_name, it runs abiword respecting the keyboard script. Ideally, the original script would start at boot and then any application I use would work with it, just like what happens if I run it first in a terminal (without the application_name line) and then run apps from it. I tried to make the script run at boot by adding it to /etc/rc.local, but to no avail. I also tried adding it to init.d, but that didn't work either. If anyone can help, I'd be most grateful.
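
    (One likely reason rc.local and init.d fail: they run before any X session exists, so setxkbmap and xmodmap have no display to talk to. A hedged alternative is to save the script without the application_name line, say as ~/bin/intl-keyboard.sh, and register it as a session autostart entry; paths and names here are examples:)

        # ~/.config/autostart/intl-keyboard.desktop
        [Desktop Entry]
        Type=Application
        Name=Windows-like intl keyboard
        Exec=/home/YOUR_USER/bin/intl-keyboard.sh
        X-GNOME-Autostart-enabled=true

    (One caveat: the GTK_IM_MODULE export in an autostart script only affects that script's children, so it may belong in ~/.profile instead.)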

