Search Results

Search found 1063 results on 43 pages for 'alternate'.


  • 11.10 AMD64 alternate installer has broken packages?

    - by Ibrahim
    I'm installing Ubuntu 11.10 from the alternate install ISO because I need to use LVM. Unfortunately, at some point the installer fails because it can't install libpurple0 and ubuntu-desktop: they depend on libsasl2-modules, which is somehow not installable. It reports the same error for xserver-xorg-video-all, though I could probably live without that one. It's annoying that this is broken. I'm guessing it might work if I had internet access, but right now I'm on a campus network with a captive portal, so I can't get a network connection without using a browser to log in. Just thought someone should know, or maybe I'm doing something wrong. I'm going to try installing 11.04 alternate and then upgrading, I guess.

    Read the article

  • Can't install alternate CD from USB?

    - by mattias
    Hi, I'm trying to install Ubuntu 12.04 with full hard disk encryption. After downloading and installing the Ubuntu live CD, I learned that TrueCrypt doesn't support full disk encryption on Linux, and that the best way to get "nearly full disk encryption" on Ubuntu is to install from the alternate install CD. I tried that, but something is wrong with my CD reader/burner, so it doesn't boot when I insert the CD. My thought was to take the .iso that I downloaded on my unencrypted Ubuntu system and use Unetbootin to make a USB drive. The USB drive used for this is exactly the same brand as one that I know worked with a previous Ubuntu live system on the same computer; I also used Unetbootin for that USB, but I created it from Windows that time. The USB stick boots fine and I get through the first couple of steps in the installation process. After a while, however, I get a box titled "Load installer components from CD" with the following error message: "There was a problem reading data from the CD-ROM. Please make sure it is in the drive. If retrying does not work, you should check the integrity of your CD-ROM. Failed to copy file from CD-ROM. Retry?" Then I can't get any further. I googled a lot and found this page, which seems to tackle this very problem: http://www.dotkam.com/2010/11/29/ins...mage-from-usb/ I tried to do what it said: after pressing TAB, I wrote cdrom-detect/try-usb=true, which I believe is right. When I press TAB there is already a line reading /ubnkern initrd=/ubninit vga=788 -- quiet, which can be edited. I have tried both deleting the text before the "--" and just inserting cdrom-detect/try-usb=true before it. Any idea what could be wrong? I would like to do a full system encryption, or as full as possible; I don't want to encrypt just my /home folder. Maybe this isn't the easiest way. I use SanDisk USB sticks. I know there is a problem with the U3 launcher on some SanDisks, but I never had to remove U3 from similar disks before, and the alternate install does boot, so I don't think the U3 removal tool would help me. Any help, or a pointer to an easier way to do this, would be appreciated.
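
    (For reference, with the parameter inserted before the "--" separator, the Unetbootin boot line quoted above would read as follows - a sketch reconstructed from the question's own default line:)

        /ubnkern initrd=/ubninit vga=788 cdrom-detect/try-usb=true -- quiet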

    Read the article

  • No root file system - Alternate CD + LVM

    - by Carlos
    I am trying to install 11.10 as a dual boot with Windows 7. Everything is partitioned as you can see here: http://www.flickr.com/photos/42897978@N00/7111180385/ I burned the Alternate CD ISO to a CD, booted from it, and followed the instructions up to Partitioning. There, I configured the LVM partitions as follows: Volume Group ubuntu-vg - Uses Physical Volume /dev/sda7 380GB - Provides Logical Volume home-lv 60GB - Provides Logical Volume root-lv 60GB - Provides Logical Volume swap-lv 6GB That is all I want (note that my /boot is outside of LVM). Then, when I confirm that all is OK and choose to write it to disk and continue with the installation, I get the following error: !! Partition Disks No root file system No root file system is defined. Please correct this from the partitioning menu. What should I fix, and how? I tried issuing "Revert changes to partitions", but nothing happens. It seems that the LVM configuration has already been written to the disk. HELP!!
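
    (For reference, the layout described above corresponds roughly to the following CLI commands - a sketch only; the alternate installer drives these same steps through its menus:)

        sudo pvcreate /dev/sda7                      # physical volume
        sudo vgcreate ubuntu-vg /dev/sda7            # volume group
        sudo lvcreate -L 60G -n root-lv ubuntu-vg    # logical volumes
        sudo lvcreate -L 60G -n home-lv ubuntu-vg
        sudo lvcreate -L 6G  -n swap-lv ubuntu-vg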

    Read the article

  • Where is the alternate installation ISO for Ubuntu Saucy?

    - by Leon Nardella
    I would like to download the alternate installation ISO for the current release, Ubuntu 13.10 Saucy Salamander, but I can't find it on the releases.ubuntu.com website. Aren't alternate ISOs provided anymore? I need the alternate ISO because Xorg crashes on recent versions of Ubuntu when using old SiS onboard graphics. In fact, I just patched the Ubuntu sources and built a custom package of the xserver-xorg-video-sis driver on my PPA to try to fix this problem after installing with the alternate ISO.

    Read the article

  • Why does the integrity check fail for the 12.04.1 Alternate ISO?

    - by mghg
    I have followed various recommendations from the Ubuntu documentation to create a bootable Ubuntu USB flash drive using the 12.04.1 Alternate install ISO file for 64-bit PC, but the integrity test of the USB stick has failed and I do not see why. These are the steps I took: 1. Download the 12.04.1 Alternate install ISO file for 64-bit PC (ubuntu-12.04.1-alternate-amd64.iso) from http://releases.ubuntu.com/12.04.1/, as well as the MD5, SHA-1 and SHA-256 hash files and the related PGP signatures. 2. Verify the data integrity of the ISO file using the MD5, SHA-1 and SHA-256 hash files, after first verifying the hash files themselves using the related PGP signature files (see e.g. https://help.ubuntu.com/community/HowToSHA256SUM and https://help.ubuntu.com/community/VerifyIsoHowto). 3. Create a bootable USB stick using Ubuntu's Startup Disk Creator program (see http://www.ubuntu.com/download/help/create-a-usb-stick-on-ubuntu). 4. Boot my computer using the newly made 12.04.1 Alternate install USB stick. 5. Select the option "Check disc for defects" (see https://help.ubuntu.com/community/Installation/CDIntegrityCheck). Steps 1, 2, 3 and 4 went without any problem or error messages. However, step 5 ended with an error message entitled "Integrity test failed" and with the following content: The ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default file failed the MD5 checksum verification. Your CD-ROM or this file may have been corrupted. I have experienced the same error message (it might only be similar, since I have no exact notes) in previous attempts using the 12.04 (i.e. not the maintenance release) Alternate install ISO file. In those cases I tried to install anyway and have so far not experienced any problems to my knowledge. Is the failed integrity check described above a serious error? What is the solution? Or can it be ignored without further problems?
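
    (For reference, step 2 corresponds to commands along these lines, run in the download directory; the key ID shown is the Ubuntu CD image signing key referenced by the linked HowTos:)

        gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xFBB75451
        gpg --verify SHA256SUMS.gpg SHA256SUMS
        sha256sum -c SHA256SUMS 2>&1 | grep ubuntu-12.04.1-alternate-amd64.iso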

    Read the article

  • Is the Alternate Ubuntu installer still required for LVM or Software RAID setup?

    - by jimp
    Over the past 5 years I have been setting up Ubuntu servers using the Alternate installer. I need to provision a new server today, and I'm curious whether the Alternate CD is still the only way to set up LVM/RAID at installation time. In my limited experience with Red Hat Enterprise Linux, I noticed its single installer configures LVM automatically. Has Ubuntu's installer, at least the standard "Server" installer, added support for LVM/RAID, or is the Alternate installer still required for that kind of server setup? http://mirror.anl.gov/pub/ubuntu-iso/DVDs/ubuntu/12.04.1/release/ Alternate install CD: "The alternate install CD allows you to perform certain specialist installations of Ubuntu. It provides for the following situations: setting up automated deployments; upgrading from older installations without network access; LVM and/or RAID partitioning; installs on systems with less than about 384MiB of RAM (although note that low-memory systems may not be able to run a full desktop environment reasonably)." LVM has always been fundamental for our server needs, so I'm surprised if it is still not considered a server-worthy feature.

    Read the article

  • Google Webmaster Tools "Incorrect rel-alternate-hreflang implementation" warning message

    - by Noam
    I'm getting this warning message in Google Webmaster Tools: "Incorrect rel-alternate-hreflang implementation. In particular, there seems to be a problem with missing or incorrect bi-directional linking (when page A links with hreflang to page B, there must be a link back from B to A as well)." The message seems pretty straightforward, but when checking their example pages I can't find anything wrong. I'm using alternate for translation of the main site menu, titles, etc. In each page I have this: <link rel="alternate" hreflang="en" href="http://mydomain.com/page" /> <link rel="alternate" hreflang="jp" href="http://ja.mydomain.com/page" /> <link rel="alternate" hreflang="ko" href="http://ko.mydomain.com/page" /> <link rel="alternate" hreflang="th" href="http://th.mydomain.com/page" /> <link rel="alternate" hreflang="es" href="http://es.mydomain.com/page" /> <link rel="alternate" hreflang="pt" href="http://pt.mydomain.com/page" /> I've double-checked that this exists in all six pages. This is the first time I've seen this message, although I implemented this at least six months ago and the implementation hasn't changed. Is there any way to check a specific set of pages for these things? Am I missing something in my implementation? We're auto-redirecting people from a location to their specific language and giving them an option to change it manually. I've also just found out about the suggestion to use the Vary HTTP header - is that relevant and important here?
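
    (One quick way to spot-check a given set of pages is to fetch each language variant and grep for the return links - a sketch; adjust the URL list to the pages being flagged:)

        for url in http://mydomain.com/page http://ja.mydomain.com/page http://ko.mydomain.com/page; do
          echo "== $url"
          curl -s "$url" | grep -o '<link rel="alternate"[^>]*>'
        done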

    Read the article

  • Alternate CD image downloaded last year gives different hash

    - by Oxwivi
    I downloaded the alternate CD image via torrent some time last year, and now that I need it again I decided to check its MD5 hash. $ md5sum ubuntu-11.10-alternate-i386.iso b502888194367acdec4d79203e7a539c ubuntu-11.10-alternate-i386.iso The problem is that the reference hash it's supposed to match is completely different: 24da873c870d6a3dbfc17390dda52eb8 ubuntu-11.10-alternate-i386.iso Can I safely conclude that the image I downloaded is corrupted? Reference: UbuntuHashes - Community Ubuntu Documentation

    Read the article

  • Android ListView with alternate color and on focus color

    - by Yogesh
    I need to set alternate row colors in a ListView, but when I do, it removes/disables the default yellow focus highlight. I tried setting a background color directly: rowView.setBackgroundColor(SOME_COLOR); and also with a background drawable: rowView.setBackgroundColor(R.drawable.view_odd_row_bg); <!-- Even though these two point to the same resource, have two states so the drawable will invalidate itself when coming out of pressed state. --> <item android:state_focused="true" android:state_enabled="false" android:state_pressed="true" android:drawable="@color/highlight" /> <item android:state_focused="true" android:state_enabled="false" android:drawable="@color/highlight" /> <item android:state_focused="true" android:state_pressed="true" android:drawable="@color/highlight" /> <item android:state_focused="false" android:state_pressed="true" android:drawable="@color/highlight" /> <item android:state_focused="true" android:drawable="@color/highlight" /> but it won't work. Is there any way to set the row background color and the focus color simultaneously so that both work?

    Read the article

  • netbook alternate installation/update

    - by Dustin
    OK, I have an Aspire One D255E netbook. I installed 9.10 successfully, but have no internet connection to upgrade to 10.04 or 10.10. I have the 10.10 alternate ISO (couldn't get 10.04). However, it says that no CD-ROM is present (netbook via live USB), and I directed it to sdb1 but that did not work. Could someone guide me through the steps for installation via the alternate USB only (and no internet)? The live USBs of 10.04 and 10.10 had working internet connections, but the installation hung (non-alternate). Thank you greatly in advance.

    Read the article

  • netbook alternate installation/update

    - by user11847
    I have an Aspire One D255E netbook. I installed 9.10 successfully, but have no internet connection to upgrade to 10.04 or 10.10. I have the 10.10 alternate ISO (couldn't get 10.04). However, it says that no CD-ROM is present (netbook via live USB), and I directed it to sdb1 but that did not work. Could someone guide me through the steps for installation via the alternate USB only (and no internet)? The live USBs of 10.04 and 10.10 had working internet connections, but the installation hung (non-alternate). Thank you greatly in advance.

    Read the article

  • Does moving a file outside NTFS lose data in alternate data streams?

    - by jay
    I have a lot of files on a machine running Windows Server 2008 that I want to move to a Fedora machine. How can I keep the attributes stored in, for example, media files (date taken, rating, length, etc.) while transferring them outside the realm of NTFS's alternate data streams? I'm aware that similar metadata exists in other file systems, but what happens when you move these files, and what's the best way to retain the metadata in other file systems?
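
    (To see what is at stake before the transfer, the streams on a file can be listed from cmd on Server 2008 - a sketch; "streamname" and the path are placeholders:)

        dir /R "C:\media\photo.jpg"
        more < "C:\media\photo.jpg:streamname"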

    Read the article

  • Multi-language site - use of canonical link and link rel="alternate"

    - by julia
    I keep reading everywhere that if you have a multilanguage site, where the same page appears in, say, French and English, then this is considered duplicate content by Google. Using a canonical link is given as the solution, but I do not understand how to use it in this case. Should I: Choose either the French URL or the English URL to be the canonical (main) one, and place the canonical link there? If so, how do I decide which of the two URLs should be canonical? Both languages are important to me, and I want the content in both languages to be indexed by Google and served to the user depending on the language in which he searches. OR should I place a canonical link on both the French and English URLs? If so, I do not understand the point of the canonical link: in that case, would both URLs be indexed and considered "important" by Google rather than duplicates? I have also read that link rel="alternate" can be used to indicate to Google that, for example, the French URL is the French-language equivalent of the English page. This makes sense and I understand how to use such links, but how are they combined with canonical links? Should I define both the canonical URL AND specify rel="alternate" on both URLs? Could someone help me clarify this, because I'm stuck and can't seem to find a good enough explanation in different sources.
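
    (For illustration, and not taken from the question itself: a commonly recommended pattern is a self-referencing canonical on each language version combined with the full set of alternates - e.g. on the French page, with example.com as a placeholder domain:)

        <link rel="canonical" href="http://example.com/fr/page" />
        <link rel="alternate" hreflang="fr" href="http://example.com/fr/page" />
        <link rel="alternate" hreflang="en" href="http://example.com/en/page" />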

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find; this typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search against, these facets need to continually refresh. The whole experience allows users to explore alternate brands and price ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sale or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; in this specific example I have used the SolrNet C# library to interface to the Java platform. I am also not going to talk about configuring Solr indexing - but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, applying an XSLT transform to the file to make it conform to Solr's schema, and using a simple HTTP POST to send it to the search engine for indexing.
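
    (For completeness, that indexing pipeline is not shown in the post; in outline it is something like this PowerShell sketch - the export URL and stylesheet name are placeholders, not real Commerce Server endpoints:)

        # export the catalog as XML via the management web service (placeholder URL)
        (New-Object System.Net.WebClient).DownloadFile("http://myserver/CatalogWebService/export?catalog=MyCatalog", "catalog-export.xml")
        # transform the export into Solr's <add><doc> format with a custom stylesheet
        $xslt = New-Object System.Xml.Xsl.XslCompiledTransform
        $xslt.Load("catalog-to-solr.xslt")
        $xslt.Transform("catalog-export.xml", "solr-docs.xml")
        # POST the documents to Solr and commit
        $wc = New-Object System.Net.WebClient
        $wc.Headers["Content-Type"] = "text/xml"
        $wc.UploadFile("http://localhost:8983/solr/update?commit=true", "POST", "solr-docs.xml") | Out-Null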
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components must be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this: As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation is the object in which we receive our search parameters, the cache allows access to the Commerce Server cache so we can store regularly accessed information, and the response object is the object on which we will return the result of our search. Inside this method is simply where we are going to inject our logic for our third-party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here; I would highly recommend looking at the SolrNet wiki, however, as it has some great explanations of how the API works. What you will find is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box, the CommerceQueryOperation you receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant, you will need to create another class, this time derived from Microsoft.Commerce.Contract.Messages.CommerceSearchCriteria, and within it detail the properties you will require to submit as parameters to the Solr search API. My example looks like this: [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")] public class CommerceCatalogSolrSearch : CommerceSearchCriteria { private Dictionary<string, string> _facetQueries; public CommerceCatalogSolrSearch() { _facetQueries = new Dictionary<String, String>(); } public Dictionary<String, String> FacetQueries { get { return _facetQueries; } set { _facetQueries = value; } } public String SearchPhrase { get; set; } public int PageIndex { get; set; } public int PageSize { get; set; } public IEnumerable<String> Facets { get; set; } public string Sort { get; set; } public new int FirstItemIndex { get { return (PageIndex-1)*PageSize; } } public int LastItemIndex { get { return FirstItemIndex + PageSize; } } } To allow you to construct a CommerceQueryOperation call within the API, you will also need to create another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria you have just created and expose the properties you want set.
My message builder looks like this: public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder { private CommerceCatalogSolrSearch _solrSearch; public CommerceCatalogSolrSearchBuilder() { _solrSearch = new CommerceCatalogSolrSearch(); } public String SearchPhrase { get { return _solrSearch.SearchPhrase; } set { _solrSearch.SearchPhrase = value; } } public int PageIndex { get { return _solrSearch.PageIndex; } set { _solrSearch.PageIndex = value; } } public int PageSize { get { return _solrSearch.PageSize; } set { _solrSearch.PageSize = value; } } public Dictionary<String,String> FacetQueries { get { return _solrSearch.FacetQueries; } set { _solrSearch.FacetQueries = value; } } public String[] Facets { get { return _solrSearch.Facets.ToArray(); } set { _solrSearch.Facets = value; } } public override CommerceSearchCriteria ToSearchCriteria() { return _solrSearch; } } Once you have these two classes in place you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch criteria you have just created, e.g. public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation) { var searchCriteria = operation as CommerceQueryOperation; if (searchCriteria == null) throw new Exception("No search criteria present"); var local = searchCriteria.SearchCriteria as CommerceCatalogSolrSearch; if (local == null) throw new Exception("Unexpected Search Criteria in Operation"); return local; } Now that you have all of your search parameters present, you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results back into Commerce Entities. You do this via another extensibility point within the Commerce Server API called translators. Translators are another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. When you implement the required method of the interface you get a single Translate method, which takes a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate that you map the objects using your MetadataDefinitions.xml file.
Once complete, your translator will look something like the following: public class SolrEntityTranslator : IToCommerceEntityTranslator { public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn) { if (source.GetType().Equals(typeof(SearchProduct))) { var searchResult = (SearchProduct)source; destinationCommerceEntity.Id = searchResult.ProductId; destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title); destinationCommerceEntity.ModelName = "Product"; } } Once you have a translator in place you can safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this: foreach (SearchProduct result in matchingProducts) { var destinationEntity = new CommerceEntity(_returnModelName); Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties); response.CommerceEntities.Add(destinationEntity); } In Solr I actually have two objects being returned - a product and a collection of facets - so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the Translator helper class separately. When all of this is pieced together you have successfully completed the extensibility point coding: you will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it, and identify its signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly, add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file. Your configuration is now complete, and you should be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third-party search platform and return CommerceEntities of your search results. If you require data to be enriched or logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search component first). Now, to call your code you simply request it as per any other CommerceQuery operation, but taking into account that you may receive multiple types of CommerceEntity back: public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount) { var products = new List<Product>(); var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>(); query.SearchCriteria.PageIndex = recordIndex; query.SearchCriteria.PageSize = recordsPerPage; query.SearchCriteria.SearchPhrase = searchPhrase; query.SearchCriteria.FacetQueries = facetQueries; totalItemCount = 0; CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest()); var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;
/* No results - return the empty pair */ if (queryResponse != null && queryResponse.CommerceEntities.Count == 0) return new KeyValuePair<FacetCollection, List<Product>>(); totalItemCount = (int)queryResponse.TotalItemCount; /* Prepare a multi-operation to retrieve the product variants */ var multiOperation = new CommerceMultiOperation(); /* Add products to results */ foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product")) { var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition); productQuery.SearchCriteria.Model.Id = product.Id; productQuery.SearchCriteria.Model.CatalogId = product.CatalogId; var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants); productQuery.RelatedOperations.Add(variantQuery); multiOperation.Add(productQuery); } CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest()); foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses) { if (queryOpResponse.CommerceEntities.Count() > 0) products.Add(queryOpResponse.CommerceEntities[0]); } /* Get the facet collection */ FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault(); return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products); } ...And that is it - simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third-party search platform, whilst still maintaining a unified API in the remainder of your code. This logic stands for any extensibility within Commerce Server which requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail me and I'll help out where I can!

    Read the article

  • hardy alternate cd customization and ubuntu-keyring-udeb

    - by gokul
    I have been trying to customize the Ubuntu 8.04 (Hardy Heron) alternate install CD. I have followed the community documentation at https://help.ubuntu.com/community/InstallCDCustomization#Generating_a_new_ubuntu-keyring_.deb_to_sign_your_CD to rebuild the ubuntu-keyring packages. But when the media boots I get a warning: anna[7581]: WARNING **: bad md5sum. Though I have not been able to confirm that the message is for the ubuntu-keyring-udeb package, the nearest debconf "Adding [package]" message is for ubuntu-keyring-udeb. This is followed by: INPUT critical retriever/cdrom/error. This message is already from syslog, so I don't think dpkg.log will help in this case. I have tried modifying the md5sum file within the source package manually and signing it with my own public key before building it, but that has not helped either. How do I get the installer to work in this scenario? Alternatively, can I customize the contents of Ubuntu 8.04 without signing anything?
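
    (For reference, the keyring rebuild from the linked documentation boils down to roughly the following - a sketch; the two key IDs are the standard Ubuntu archive/CD image signing keys, and YOURKEYID is a placeholder for your own key:)

        apt-get source ubuntu-keyring
        cd ubuntu-keyring-*/keyrings
        gpg --import < ubuntu-archive-keyring.gpg
        gpg --export FBB75451 437D05B5 YOURKEYID > ubuntu-archive-keyring.gpg
        cd .. && dpkg-buildpackage -rfakeroot -m"Your Name <you@example.com>" -k"YOURKEYID"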

    Read the article

  • An Alternate Vision of the Original Mario Movie [Video]

    - by Asian Angel
    In this alternate vision of the original movie, Joe Nicolosi shows us a Mario who is down and out on his luck and has lost his girlfriend to a yuppie, but refuses to give up. Can Mario turn things around? Warning: Video contains language that may be considered inappropriate. “Mario” – SXSW 2011 Film Bumper [via Geeks are Sexy]

    Read the article

  • Netbook Remix 10.04 to 10.10 upgrade using alternate iso

    - by Suman Subramonian
    I'm using Netbook Remix 10.04 now and have the 10.10 alternate ISO. If I use that ISO to upgrade, will I lose my netbook version? I've seen in some forums that the upgrade resulted in a change from the netbook version to the desktop version. Updated on 15/12/2010: I upgraded from 10.04 to 10.10, but I'm getting an error like this after restart: modprobe: FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory I searched the forums and found a solution like this: Open a terminal: gksudo gedit /etc/initramfs-tools/initramfs.conf and change the line MODULES=most to MODULES=dep. Then use Synaptic (System > Administration > Synaptic Package Manager) to reinstall initramfs-tools. I'll definitely be trying this later. After that, when the system tries to log in, my screen starts flashing with just the Ubuntu netbook desktop wallpaper on it; no other options are available, and it flashes continuously. If I press the power button, a window comes up with options like Shut Down, Restart, Hibernate, etc., but the screen won't stop flashing either way. I've uploaded a 1 min video; please go through it, as it will give you a clear idea of the error I'm facing: Video Link Here
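
    (For reference, the quoted fix done entirely from a terminal would be roughly - a sketch:)

        sudo sed -i 's/MODULES=most/MODULES=dep/' /etc/initramfs-tools/initramfs.conf
        sudo apt-get install --reinstall initramfs-tools
        sudo update-initramfs -u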

    Read the article

  • Gparted Partition Mount Points Alternate Between 2 Physical Disk Drives

    - by California Ken
    I'm running Ubuntu Server 14.04 on a system with 2 physical disk drives and am frequently seeing mount errors on startup. When I check the drive partitions using GParted, I see that my two "non-system-created" data partitions have the wrong disk assignments (i.e. sda1 vs sdb1, or vice versa). If I hand-edit /etc/fstab to match GParted, the system will boot error-free one time. On the second restart I get the "serious mount problem" error for the 2 data partitions, and when I check GParted the disk assignments have changed again (again, GParted and fstab don't match). A listing of my /etc/fstab is: # /etc/fstab: static file system information. # Use 'blkid' to print the universally unique identifier for a device; this may be used with UUID= as a more robust way to name devices that works even if disks are added and removed. See fstab(5). # / was on /dev/sdb2 during installation UUID=766a06a4-e5af-484a-adf0-fa1e88da7212 / ext4 errors=remount-ro,user_xattr,acl,barrier=1 0 1 # swap was on /dev/sda6 during installation UUID=8c42f835-ead3-43fb-88d8-196f5dfc3aa7 none swap sw 0 0 # swap was on /dev/sdb3 during installation UUID=2214deec-ba98-47da-aea7-4e46998f3e57 none swap sw 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 /dev/sda1 /media/ken/Linux-Data ext3 defaults 0 2 /dev/sda5 /media/ken/Data2 ext4 defaults 0 2 The device designations in the last 2 lines are the ones in question. The fstab entries do NOT change between system restarts, but the mount points in the GParted display do. Does anyone have a fix for this? Thanks. Mr. Young and Mr. Gedak: following is my fstab file and two blkid outputs. The fstab output is correct. The first blkid output was after a reboot and is WRONG! The sda and sdb device partition data are reversed. The 2nd blkid output was after a second reboot (fstab not changed); it shows the sda and sdb partition data CORRECTLY. I didn't see any duplicate UUIDs. Does anyone have any idea why the GParted and blkid outputs alternate on consecutive reboots? The alternating partition data is real, since when the partition assignments are reversed the boot sequence halts with disk mounting errors (I have to press "s" to skip the mounts). Thanks again. Ken I copied the contents of a text file showing my fstab and 2 blkid outputs. The text file contents show up in the text entry box but do not appear in the main body of the question. Is there a way I can attach the text file, or edit this question so that the text is displayed for question viewers?
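
    (For reference, the last two entries are the only ones not pinned by UUID; the UUID form, taking the values from blkid output as the file's own comment suggests, would look like this - placeholder UUIDs shown:)

        # $ sudo blkid /dev/sda1    <- copy the UUID="..." value into fstab
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/ken/Linux-Data  ext3  defaults  0  2
        UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy  /media/ken/Data2       ext4  defaults  0  2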

    Read the article

  • How to install Edubuntu on a system with low memory (256 MB)?

    - by int_ua
    I'm preparing an old system with 256 MB of RAM to send to some children. It doesn't have an Ethernet controller, and there is no Internet access at the destination. I've chosen Edubuntu for obvious reasons and modified it with UCK, trying to minimize memory usage just to install, let alone use it yet. But Ubiquity won't start even in Openbox (I edited /etc/lightdm/lightdm.conf) because there is no space left on /cow right after booting. I've already removed things like ibus, zeitgeist, update-manager (no network access, after all), twisted-core, and the Plymouth logos. I'm thinking about creating a swap partition on the HDD; can it later be added to expand this /cow? Is there a package for the text-mode installation that is used on the Alternate CDs? I don't want to re-create Edubuntu from an Alternate CD. This behavior is reproducible in a VM limited to 256 MB of RAM.
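
    (For reference, creating and enabling a swap partition from the live session would be roughly the following - a sketch, with /dev/sda5 as a placeholder device:)

        sudo mkswap /dev/sda5
        sudo swapon /dev/sda5
        free -m    # confirm the swap is active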

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft got a decent API for building generic HTTP endpoints into the framework. DataContractJsonSerializer sucks As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are: no support for untyped values (object, dynamic, anonymous types); MS AJAX-style date formatting; ugly serialization formats for types like dictionaries. To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there is IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid, for example). Creating custom types to fit these messages seems like overkill, and projections using LINQ make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box: public object GetAnonymousType() { return new { name = "Rick", company = "West Wind", entered = DateTime.Now }; } Basically, DataContractJsonSerializer will not let you return anything that doesn't have an explicit type. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but there are no details yet on what capacity it will show up in. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version. What about JsonValue? To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
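
    (As a quick illustration of that parsing side - a sketch, not from the original post:)

        // parse arbitrary JSON without declaring a .NET type first
        JsonValue json = JsonValue.Parse("{\"name\":\"Rick\",\"company\":\"West Wind\"}");
        string name = (string)json["name"];  // explicit cast pulls out the primitive value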
For serialization you can create an object structure on the fly and pass it back as part of a Web API action method like this: public JsonValue GetJsonValue() { dynamic json = new JsonObject(); json.name = "Rick"; json.company = "West Wind"; json.entered = DateTime.Now; dynamic address = new JsonObject(); address.street = "32 Kaiea"; address.zip = "96779"; json.address = address; dynamic phones = new JsonArray(); json.phoneNumbers = phones; dynamic phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); phone = new JsonObject(); phone.type = "Mobile"; phone.number = "808 123-1234"; phones.Add(phone); /* var jsonString = json.ToString(); */ return json; } which produces the following output (formatted here for easier reading): { name: "Rick", company: "West Wind", entered: "2012-03-08T15:33:19.673-10:00", address: { street: "32 Kaiea", zip: "96779" }, phoneNumbers: [ { type: "Home", number: "808 123-1233" }, { type: "Mobile", number: "808 123-1234" }] } If you need to build a simple JSON type on the fly, these types work great. But if you have an existing type - or worse, a query result/list that's already formatted - JsonValue et al. become a pain to work with. As far as I can see, there's no way to just throw an object instance at JsonValue and have it convert into a JsonValue dictionary; it's a manual process. Using alternate serializers in Web API So, currently the default serializer in Web API is DataContractJsonSerializer, and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions, or Json.NET, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer: using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using System.IO; namespace Westwind.Web.WebApi { public class JavaScriptSerializerFormatter : MediaTypeFormatter { public JavaScriptSerializerFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { /* don't serialize JsonValue structures - use the default formatter for those */ if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Task<object>.Factory.StartNew(() => { var ser = new JavaScriptSerializer(); string json; using (var sr = new StreamReader(stream)) { json = sr.ReadToEnd(); sr.Close(); } object val = ser.Deserialize(json, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew(() => { var ser = new JavaScriptSerializer(); var json = ser.Serialize(value); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf, 0, buf.Length); stream.Flush(); }); return task; } } } The formatter implementation is pretty simple: you override four methods to say which types you can handle, and then handle the input or output streams to parse or create the JSON data. Note that when creating output you want to take care to still allow the JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or Json.NET handle them, they'd try to render the dictionaries, which is very undesirable.
If you'd rather use Json.NET, here's the Json.NET version of the formatter (this code requires a reference to Json.NET in your project): using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using Newtonsoft.Json; using System.IO; using Newtonsoft.Json.Converters; namespace Westwind.Web.WebApi { public class JsonNetFormatter : MediaTypeFormatter { public JsonNetFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { /* don't serialize JsonValue structures - use the default formatter for those */ if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Task<object>.Factory.StartNew(() => { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore }; var sr = new StreamReader(stream); var jreader = new JsonTextReader(sr); var ser = new JsonSerializer(); ser.Converters.Add(new IsoDateTimeConverter()); object val = ser.Deserialize(jreader, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew(() => { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore }; string json = JsonConvert.SerializeObject(value, Formatting.Indented, new JsonConverter[1] { new IsoDateTimeConverter() }); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf, 0, buf.Length); stream.Flush(); }); return task; } } } One advantage of the Json.NET serializer is that you can specify a few options for how things are formatted and handled. You get null-value handling, and you can plug in the IsoDateTimeConverter, which is nice to produce proper ISO dates - something I would expect any JSON serializer to output these days. Hooking up the formatters Once you've created the custom formatters you need to enable them for your Web API application. To do this, use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project: protected void Application_Start(object sender, EventArgs e) { /* Action-based routing (used for RPC calls) */ RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); /* Web API configuration to hook up formatters and message handlers (optional) */ RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { /* Add the JavaScriptSerializer formatter instead - add at the top to make it the default: config.Formatters.Insert(0, new JavaScriptSerializerFormatter()); */ /* Add the Json.NET formatter - add at the top so it fires first;
this leaves the old one in place so JsonValue/JsonObject/JsonArray are still handled */ config.Formatters.Insert(0, new JsonNetFormatter()); } One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything, and so it is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is, you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I am also not removing the old formatter, because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since the formatters process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized. Summary Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like dictionaries and dates, for example), so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging Json.NET into Web API and making that the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft had figured this out sooner instead of integrating it now at the last minute, especially given that Json.NET has a similar set of lower-level JSON objects (JsonValue/JsonObject etc.) which will now end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray, and now Json.NET). For years I've been using my own JSON serializer because the built-in choices are both limited. However, with an official endorsement of Json.NET, I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API, AJAX, ASP.NET

    Read the article

  • Alternate port numbers for Supermicro IPMI View software

    - by MC9000
    I'm using the IPMI View software to manage a SuperMicro server, but would like to use alternate port numbers within the program itself. In other words: if I use a web browser, it defaults to port 80, but I can change that port to, say, 12345 (or whatever) and type the IP address into the browser (like http://xxx.xxx.xxx.xxx:12345) and that works just fine. However, IPMI View will assume port 80 and load the browser with the bare IP (which naturally won't work, so I have to type in the alternate port number manually). I can deal with that. The clincher is that if I use a port other than 623 for management (say 55623, for example), IPMI View will not find it. The same goes for the iKVM port number. Is there some place to specify this (to tell IPMI View to use the alternate port numbers), like a settings file? I'm running this from a Windows client.

    Read the article
