Search Results

Search found 34758 results on 1391 pages for 'linear linked list invert'.


  • Turn the Cables from Your Gaming or Electronics Setup into a Wall-Based Work of Art

    - by Asian Angel
    When it comes to gaming setups or our favorite electronics, there are always lots of cables to deal with. Now, you could just bundle those cables up and try to hide them as best you can, or you could turn them into a work of art. You can see additional pictures and find out how to set up your own wall-based work of art with those cables by visiting the blog post linked below. Feature Friday: Cords Away [via There I Fixed It]

    Read the article

  • Telerik Silverlight Grid with BCS Lists in SharePoint 2010

    - by Sahil Malik
    Okay, my next video is online. In this video, I demonstrate the Telerik Silverlight grid working with a Business Connectivity Services (BCS) list over the Client Object Model. I use the Telerik grid to create a view on a BCS list and demonstrate the rich value that a nice Silverlight grid can bring to SharePoint 2010. The presentation is mostly code and is about half an hour in length. Watch the video
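
    For readers who have not used the Client Object Model before, the call pattern behind a grid like this is roughly the following. The site URL, list title, and column name are hypothetical, and a BCS external list is surfaced to the client API much like a regular list; in Silverlight the query would be executed with ExecuteQueryAsync rather than the synchronous call shown here.

    ```csharp
    using System;
    using Microsoft.SharePoint.Client;

    public static class BcsListReader
    {
        public static void LoadItems()
        {
            // Hypothetical site and external list exposed through BCS.
            var context = new ClientContext("http://sharepoint/sites/demo");
            List list = context.Web.Lists.GetByTitle("Customers");

            // Retrieve all items; this result set is what a Silverlight grid would bind to.
            ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
            context.Load(items);
            context.ExecuteQuery(); // Silverlight: ExecuteQueryAsync(success, failure)

            foreach (ListItem item in items)
            {
                Console.WriteLine(item["Title"]);
            }
        }
    }
    ```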

    Read the article

  • Tor check failed though Vidalia shows green onion

    - by Wolter Hellmund
    I have installed Tor successfully and Vidalia shows it is running without problems; however, when I check whether I am using Tor on this website, I get an error message saying I am not using Tor. I have tried two things to fix this: I installed ProxySwitchy in Google Chrome and created a profile for Tor (with address 127.0.0.1, port 8118), but enabling the proxy doesn't change the results on the Tor check website linked before. I changed my network proxy settings through System Settings > Network from None to Manual, and set the address to 127.0.0.1 and the port to 8118 for everything except the SOCKS host, for which I entered 9050 instead. This makes the internet stop working completely. How can I fix this problem?

    Read the article

  • Mounting a Microsoft Azure CloudDrive in a VMRole

    - by SeanBarlow
    Mounting a Drive in a VMRole is a little more complicated than in a web or worker role. The Web and Worker roles offer OnStart and OnStop events, which you can use to mount or unmount your drives. The VMRole does not have these same events, so you have to provide another way for the drives to be mounted or unmounted. The problem I have run into is: what if you have multiple drives and you only want to mount certain drives? How do you let your user mount the drive? I am not going to go into details on what kind of GUI to present to the user; I have done this in a simple WPF application as well as a console application. We are going to need to get the storage account details. One thing to note: when you are mounting cloud drives you cannot use https and have to use http. We force the use of http by passing false when we create the CloudStorageAccount.   StorageCredentialsAccountAndKey credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey"); CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, false);   Next we need to get a reference to the container.   var blobClient = storageAccount.CreateCloudBlobClient(); var container = blobClient.GetContainerReference("ContainerName");   Now we need to get a list of the drives in the container.   var drives = container.ListBlobs();   Now that we have a list of the drives in the container, we can let the user choose which drive they want to mount. I am just selecting the first drive in the list for the example and getting the Uri of the drive.   var driveUri = drives.First().Uri;   Now that we have the Uri, we need to get the reference to the drive. var drive = new CloudDrive(driveUri, storageAccount.Credentials);   Now all that is left is to mount the drive.   var driveLetter = drive.Mount(0, DriveMountOptions.None);   To unmount the drive, all you have to do is call Unmount on the drive. drive.Unmount();   You do need to make sure you unmount the drives when you are done with them. I have run into issues with the drives being locked until the VMRole is rebooted. I have also managed to have a drive be permanently locked and was forced to delete it and upload it again. I have been unable to reproduce the permanent lock, but I am still trying. The CloudDrive class provides a handy method to retrieve all the mounted drives in the Role. foreach (var drive in CloudDrive.GetMountedDrives()) {          var mountedDrive = Account.CreateCloudDrive(drive.Value.PathAndQuery);          mountedDrive.Unmount(); }
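
    Pulling those fragments together, here is a minimal consolidated sketch of the mount/unmount flow. The account name, key, and container name are placeholders, and this assumes the Windows Azure StorageClient/CloudDrive APIs used in the post:

    ```csharp
    using System;
    using System.Linq;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class DriveMounter
    {
        public static void MountFirstDrive()
        {
            // Cloud drives must be accessed over http, so pass false for useHttps.
            var credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey");
            var storageAccount = new CloudStorageAccount(credentials, false);

            // List the page blobs (VHDs) in the container that holds the drives.
            var container = storageAccount.CreateCloudBlobClient().GetContainerReference("ContainerName");
            var drives = container.ListBlobs();

            // For brevity, mount the first drive; a real app would let the user pick.
            var drive = new CloudDrive(drives.First().Uri, storageAccount.Credentials);
            string driveLetter = drive.Mount(0, DriveMountOptions.None);
            Console.WriteLine("Mounted at " + driveLetter);

            // ... work with the drive ...

            // Always unmount when finished, otherwise the VHD can stay locked.
            foreach (var mounted in CloudDrive.GetMountedDrives())
            {
                storageAccount.CreateCloudDrive(mounted.Value.PathAndQuery).Unmount();
            }
        }
    }
    ```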

    Read the article

  • Adding Custom Pipeline code to the Pipeline Component Toolbox

    - by Geordie
    Add ... "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\GacUtil.exe" /nologo /i "$(TargetPath)" /f  xcopy "$(TargetPath)" "C:\Program Files\Microsoft BizTalk Server 2006\Pipeline Components" /d /y to the 'Post Build event command line' for the pipeline support component project in visual studio. This will automatically put the support dll in the GAC and BizTalk’s Pipeline component directory on building the solution/project. Build the project. Right Click on title bar and select Choose items. Click on the BizTalk Pipeline Components tab and select the new component from the list. If it is not available in the list Browse to the BizTalk pipeline component folder (C:\Program Files\Microsoft BizTalk Server 2006\Pipeline Components) and select the dll.

    Read the article

  • How to Assign a Default Signature in Outlook 2013

    - by Lori Kaufman
    If you sign most of your emails the same way, you can easily specify a default signature to automatically insert into new email messages and replies and forwards. This can be done directly in the Signature editor in Outlook 2013. We recently showed you how to create a new signature. You can also create multiple signatures for each email account and define a different default signature for each account. When you change your sending account when composing a new email message, the signature would change automatically as well. NOTE: To have a signature added automatically to new email messages and replies and forwards, you must have a default signature assigned in each email account. If you don’t want a signature in every account, you can create a signature with just a space, a full stop, dashes, or other generic characters. To assign a default signature, open Outlook and click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. To change the default signature for an email account, select the account from the E-mail account drop-down list on the top, right side of the dialog box under Choose default signature. Then, select the signature you want to use by default for New messages and for Replies/forwards from the other two drop-down lists. Click OK to accept your changes and close the dialog box. Click OK on the Outlook Options dialog box to close it. You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab or double-click an email in the Drafts folder to access the Message window. Click Signature in the Include section of the New Mail Message window and select Signatures from the drop-down menu. In the next few days, we will be covering how to use the features of the signature editor next, and then how to insert and change signatures manually, backup and restore your signatures, and modify a signature for use in plain text emails.     

    Read the article

  • How to make sure that grub does use menu.lst?

    - by Glen S. Dalton
    On my Ubuntu 9.04 ("Karmic") laptop I suspect grub does not use the /boot/grub/menu.lst file. What happens on boot is that I see a blank screen and nothing happens. When I press ESC I see a boot list which is different from what I would expect from the menu.lst file. The menu lines are different and when I choose the first entry it does not use the kernel options that are in the first entry in menu.lst. Where do the entries that grub uses come from? How can I find out what happens, is there a log? I could not find anything in /var/log/syslog or /var/log/dmesg about grub using a menu.lst. How can I set it to work like expected? Some Files: $ sudo ls -la /boot/grub/*lst -rw-r--r-- 1 root root 1558 2009-12-12 15:25 /boot/grub/command.lst -rw-r--r-- 1 root root 121 2009-12-12 15:25 /boot/grub/fs.lst -rw-r--r-- 1 root root 272 2009-12-12 15:25 /boot/grub/handler.lst -rw-r--r-- 1 root root 4576 2010-03-19 11:26 /boot/grub/menu.lst -rw-r--r-- 1 root root 1657 2009-12-12 15:25 /boot/grub/moddep.lst -rw-r--r-- 1 root root 62 2009-12-12 15:25 /boot/grub/partmap.lst -rw-r--r-- 1 root root 22 2009-12-12 15:25 /boot/grub/parttool.lst $ sudo ls -la /vm* lrwxrwxrwx 1 root root 30 2009-12-12 16:15 /vmlinuz -> boot/vmlinuz-2.6.31-16-generic lrwxrwxrwx 1 root root 30 2009-12-12 14:07 /vmlinuz.old -> boot/vmlinuz-2.6.31-14-generic $ sudo ls -la /init* lrwxrwxrwx 1 root root 33 2009-12-12 16:15 /initrd.img -> boot/initrd.img-2.6.31-16-generic lrwxrwxrwx 1 root root 33 2009-12-12 14:07 /initrd.img.old -> boot/initrd.img-2.6.31-14-generic The only menu.lst that I found: $ sudo find / -name "menu.lst" /boot/grub/menu.lst $ sudo cat /boot/grub/menu.lst # menu.lst - See: grub(8), info grub, update-grub(8) # grub-install(8), grub-floppy(8), # grub-md5-crypt, /usr/share/doc/grub # and /usr/share/doc/grub-doc/. ## default num # Set the default entry to the entry number NUM. Numbering starts from 0, and # the entry number 0 is the default if the command is not used. # # You can specify 'saved' instead of a number. In this case, the default entry # is the entry saved with the command 'savedefault'. # WARNING: If you are using dmraid do not use 'savedefault' or your # array will desync and will not let you boot your system. default 0 ## timeout sec # Set a timeout, in SEC seconds, before automatically booting the default entry # (normally the first entry defined). timeout 3 ## hiddenmenu # Hides the menu by default (press ESC to see the menu) #hiddenmenu # Pretty colours color cyan/blue white/blue ## password ['--md5'] passwd # If used in the first section of a menu file, disable all interactive editing # control (menu entry editor and command-line) and entries protected by the # command 'lock' # e.g. password topsecret # password --md5 $1$gLhU0/$aW78kHK1QfV3P2b2znUoe/ # password topsecret # examples # # title Windows 95/98/NT/2000 # root (hd0,0) # makeactive # chainloader +1 # # title Linux # root (hd0,1) # kernel /vmlinuz root=/dev/hda2 ro # Put static boot stanzas before and/or after AUTOMAGIC KERNEL LIST ### BEGIN AUTOMAGIC KERNELS LIST ## lines between the AUTOMAGIC KERNELS LIST markers will be modified ## by the debian update-grub script except for the default options below ## DO NOT UNCOMMENT THEM, Just edit them to your needs ## ## Start Default Options ## ## default kernel options ## default kernel options for automagic boot options ## If you want special options for specific kernels use kopt_x_y_z ## where x.y.z is kernel version. Minor versions can be omitted. ## e.g. 
kopt=root=/dev/hda1 ro ## kopt_2_6_8=root=/dev/hdc1 ro ## kopt_2_6_8_2_686=root=/dev/hdc2 ro # kopt=root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro noresume ## default grub root device ## e.g. groot=(hd0,0) # groot=70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf ## should update-grub create alternative automagic boot options ## e.g. alternative=true ## alternative=false # alternative=true ## should update-grub lock alternative automagic boot options ## e.g. lockalternative=true ## lockalternative=false # lockalternative=false ## additional options to use with the default boot option, but not with the ## alternatives ## e.g. defoptions=vga=791 resume=/dev/hda5 ## defoptions=quiet splash # defoptions=apm=on acpi=off ## should update-grub lock old automagic boot options ## e.g. lockold=false ## lockold=true # lockold=false ## Xen hypervisor options to use with the default Xen boot option # xenhopt= ## Xen Linux kernel options to use with the default Xen boot option # xenkopt=console=tty0 ## altoption boot targets option ## multiple altoptions lines are allowed ## e.g. altoptions=(extra menu suffix) extra boot options ## altoptions=(recovery) single # altoptions=(recovery mode) single ## controls how many kernels should be put into the menu.lst ## only counts the first occurence of a kernel, not the ## alternative kernel options ## e.g. howmany=all ## howmany=7 # howmany=all ## specify if running in Xen domU or have grub detect automatically ## update-grub will ignore non-xen kernels when running in domU and vice versa ## e.g. indomU=detect ## indomU=true ## indomU=false # indomU=detect ## should update-grub create memtest86 boot option ## e.g. memtest86=true ## memtest86=false # memtest86=true ## should update-grub adjust the value of the default booted system ## can be true or false # updatedefaultentry=false ## should update-grub add savedefault to the default options ## can be true or false # savedefault=false ## ## End Default Options ## title Ubuntu 9.10, kernel 2.6.31-14-generic noresume uuid 70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf kernel /vmlinuz-2.6.31-14-generic root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro quiet splash apm=on acpi=off noresume initrd /initrd.img-2.6.31-14-generic title Ubuntu 9.10, kernel 2.6.31-14-generic (recovery mode) uuid 70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf kernel /vmlinuz-2.6.31-14-generic root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro sing le initrd /initrd.img-2.6.31-14-generic title Ubuntu 9.10, memtest86+ uuid 70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf kernel /memtest86+.bin ### END DEBIAN AUTOMAGIC KERNELS LIST These are the choices that grub displays after i press ESC: Ubuntu, Linux 2-6-31-16-generic Ubuntu, Linux 2-6-31-16-generic (recovery mode) Ubuntu, Linux 2-6-31-14-generic Ubuntu, Linux 2-6-31-14-generic (recovery mode) Memory test (memtest86+) Memory test (memtest86+, serial console 115200)

    Read the article

  • Adding and accessing custom sections in your C# App.config

    - by deadlydog
    So I recently thought I’d try using the app.config file to specify some data for my application (such as URLs) rather than hard-coding it into my app, which would require a recompile and redeploy of my app if one of our URLs changed.  By using the app.config it allows a user to just open up the .config file that sits beside their .exe file and edit the URLs right there and then re-run the app; no recompiling, no redeployment necessary. I spent a good few hours fighting with the app.config and looking at examples on Google before I was able to get things to work properly.  Most of the examples I found showed you how to pull a value from the app.config if you knew the specific key of the element you wanted to retrieve, but it took me a while to find a way to simply loop through all elements in a section, so I thought I would share my solutions here.   Simple and Easy The easiest way to use the app.config is to use the built-in types, such as NameValueSectionHandler.  For example, if we just wanted to add a list of database server urls to use in my app, we could do this in the app.config file like so: 1: <?xml version="1.0" encoding="utf-8" ?> 2: <configuration> 3: <configSections> 4: <section name="ConnectionManagerDatabaseServers" type="System.Configuration.NameValueSectionHandler" /> 5: </configSections> 6: <startup> 7: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> 8: </startup> 9: <ConnectionManagerDatabaseServers> 10: <add key="localhost" value="localhost" /> 11: <add key="Dev" value="Dev.MyDomain.local" /> 12: <add key="Test" value="Test.MyDomain.local" /> 13: <add key="Live" value="Prod.MyDomain.com" /> 14: </ConnectionManagerDatabaseServers> 15: </configuration>   And then you can access these values in code like so: 1: string devUrl = string.Empty; 2: var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection; 3: if (connectionManagerDatabaseServers != null) 4: { 5: devUrl = connectionManagerDatabaseServers["Dev"].ToString(); 6: }   Sometimes though you don’t know what the keys are going to be and you just want to grab all of the values in that ConnectionManagerDatabaseServers section.  In that case you can get them all like this: 1: // Grab the Environments listed in the App.config and add them to our list. 2: var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection; 3: if (connectionManagerDatabaseServers != null) 4: { 5: foreach (var serverKey in connectionManagerDatabaseServers.AllKeys) 6: { 7: string serverValue = connectionManagerDatabaseServers.GetValues(serverKey).FirstOrDefault(); 8: AddDatabaseServer(serverValue); 9: } 10: }   And here we just assume that the AddDatabaseServer() function adds the given string to some list of strings.  So this works great, but what about when we want to bring in more values than just a single string (or technically you could use this to bring in 2 strings, where the “key” could be the other string you want to store; for example, we could have stored the value of the Key as the user-friendly name of the url).   More Advanced (and more complicated) So if you want to bring in more information than a string or two per object in the section, then you can no longer simply use the built-in System.Configuration.NameValueSectionHandler type provided for us.  Instead you have to build your own types.  Here let’s assume that we again want to configure a set of addresses (i.e. 
urls), but we want to specify some extra info with them, such as the user-friendly name, if they require SSL or not, and a list of security groups that are allowed to save changes made to these endpoints. So let’s start by looking at the app.config: 1: <?xml version="1.0" encoding="utf-8" ?> 2: <configuration> 3: <configSections> 4: <section name="ConnectionManagerDataSection" type="ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection, ConnectionManagerUpdater" /> 5: </configSections> 6: <startup> 7: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> 8: </startup> 9: <ConnectionManagerDataSection> 10: <ConnectionManagerEndpoints> 11: <add name="Development" address="Dev.MyDomain.local" useSSL="false" /> 12: <add name="Test" address="Test.MyDomain.local" useSSL="true" /> 13: <add name="Live" address="Prod.MyDomain.com" useSSL="true" securityGroupsAllowedToSaveChanges="ConnectionManagerUsers" /> 14: </ConnectionManagerEndpoints> 15: </ConnectionManagerDataSection> 16: </configuration>   The first thing to notice here is that my section is now using the type “ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection” (the fully qualified path to my new class I created) “, ConnectionManagerUpdater” (the name of the assembly my new class is in).  Next, you will also notice an extra layer down in the <ConnectionManagerDataSection> which is the <ConnectionManagerEndpoints> element.  This is a new collection class that I created to hold each of the Endpoint entries that are defined.  Let’s look at that code now: 1: using System; 2: using System.Collections.Generic; 3: using System.Configuration; 4: using System.Linq; 5: using System.Text; 6: using System.Threading.Tasks; 7:  8: namespace ConnectionManagerUpdater.Data.Configuration 9: { 10: public class ConnectionManagerDataSection : ConfigurationSection 11: { 12: /// <summary> 13: /// The name of this section in the app.config. 
14: /// </summary> 15: public const string SectionName = "ConnectionManagerDataSection"; 16: 17: private const string EndpointCollectionName = "ConnectionManagerEndpoints"; 18:  19: [ConfigurationProperty(EndpointCollectionName)] 20: [ConfigurationCollection(typeof(ConnectionManagerEndpointsCollection), AddItemName = "add")] 21: public ConnectionManagerEndpointsCollection ConnectionManagerEndpoints { get { return (ConnectionManagerEndpointsCollection)base[EndpointCollectionName]; } } 22: } 23:  24: public class ConnectionManagerEndpointsCollection : ConfigurationElementCollection 25: { 26: protected override ConfigurationElement CreateNewElement() 27: { 28: return new ConnectionManagerEndpointElement(); 29: } 30: 31: protected override object GetElementKey(ConfigurationElement element) 32: { 33: return ((ConnectionManagerEndpointElement)element).Name; 34: } 35: } 36: 37: public class ConnectionManagerEndpointElement : ConfigurationElement 38: { 39: [ConfigurationProperty("name", IsRequired = true)] 40: public string Name 41: { 42: get { return (string)this["name"]; } 43: set { this["name"] = value; } 44: } 45: 46: [ConfigurationProperty("address", IsRequired = true)] 47: public string Address 48: { 49: get { return (string)this["address"]; } 50: set { this["address"] = value; } 51: } 52: 53: [ConfigurationProperty("useSSL", IsRequired = false, DefaultValue = false)] 54: public bool UseSSL 55: { 56: get { return (bool)this["useSSL"]; } 57: set { this["useSSL"] = value; } 58: } 59: 60: [ConfigurationProperty("securityGroupsAllowedToSaveChanges", IsRequired = false)] 61: public string SecurityGroupsAllowedToSaveChanges 62: { 63: get { return (string)this["securityGroupsAllowedToSaveChanges"]; } 64: set { this["securityGroupsAllowedToSaveChanges"] = value; } 65: } 66: } 67: }   So here the first class we declare is the one that appears in the <configSections> element of the app.config.  It is ConnectionManagerDataSection and it inherits from the necessary System.Configuration.ConfigurationSection class.  This class just has one property (other than the expected section name), that basically just says I have a Collection property, which is actually a ConnectionManagerEndpointsCollection, which is the next class defined.  The ConnectionManagerEndpointsCollection class inherits from ConfigurationElementCollection and overrides the requied fields.  The first tells it what type of Element to create when adding a new one (in our case a ConnectionManagerEndpointElement), and a function specifying what property on our ConnectionManagerEndpointElement class is the unique key, which I’ve specified to be the Name field. The last class defined is the actual meat of our elements.  It inherits from ConfigurationElement and specifies the properties of the element (which can then be set in the xml of the App.config).  The “ConfigurationProperty” attribute on each of the properties tells what we expect the name of the property to correspond to in each element in the app.config, as well as some additional information such as if that property is required and what it’s default value should be. Finally, the code to actually access these values would look like this: 1: // Grab the Environments listed in the App.config and add them to our list. 
2: var connectionManagerDataSection = ConfigurationManager.GetSection(ConnectionManagerDataSection.SectionName) as ConnectionManagerDataSection; 3: if (connectionManagerDataSection != null) 4: { 5: foreach (ConnectionManagerEndpointElement endpointElement in connectionManagerDataSection.ConnectionManagerEndpoints) 6: { 7: var endpoint = new ConnectionManagerEndpoint() { Name = endpointElement.Name, ServerInfo = new ConnectionManagerServerInfo() { Address = endpointElement.Address, UseSSL = endpointElement.UseSSL, SecurityGroupsAllowedToSaveChanges = endpointElement.SecurityGroupsAllowedToSaveChanges.Split(',').Where(e => !string.IsNullOrWhiteSpace(e)).ToList() } }; 8: AddEndpoint(endpoint); 9: } 10: } This looks very similar to what we had before in the “simple” example.  The main points of interest are that we cast the section as ConnectionManagerDataSection (which is the class we defined for our section) and then iterate over the endpoints collection using the ConnectionManagerEndpoints property we created in the ConnectionManagerDataSection class.   Also, some other helpful resources around using app.config that I found (and for parts that I didn’t really explain in this article) are: How do you use sections in C# 4.0 app.config? (Stack Overflow) <== Shows how to use Section Groups as well, which is something that I did not cover here, but might be of interest to you. How to: Create Custom Configuration Sections Using Configuration Section (MSDN) ConfigurationSection Class (MSDN) ConfigurationCollectionAttribute Class (MSDN) ConfigurationElementCollection Class (MSDN)   I hope you find this helpful.  Feel free to leave a comment.  Happy Coding!

    Read the article

  • How to start up rsyslog automatically?

    - by Enrique Videni
    Check current status of rsyslog $ chkconfig --list rsyslog rsyslog 0:off 1:off 2:off 3:off 4:off 5:off 6:off Start up rsyslog at some levels $ sudo chkconfig --level 35 rsyslog on It outputs this information: insserv: warning: script 'plymouth-stop' missing LSB tags and overrides insserv: Default-Start undefined, assuming empty start runlevel(s) for script `plymouth-stop' insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `plymouth-stop' The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'failsafe-x' missing LSB tags and overrides insserv: Default-Start undefined, assuming empty start runlevel(s) for script `failsafe-x' insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `failsafe-x' The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'udevtrigger' missing LSB tags and overrides insserv: Default-Start undefined, assuming empty start runlevel(s) for script `udevtrigger' Check current status of rsyslog again $ chkconfig --list rsyslog rsyslog 0:off 1:off 2:off 3:off 4:off 5:off 6:off I am a novice. Please show me how to use rsyslog from the beginning.

    Read the article

  • Microsoft TechEd 2010 - Day 2 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 2 @ Bangalore Today is the day 2 @ Microsoft TechEd 2010. We had lot of technical sessions as usual there were many tracks going on side by side and I was attending the Web simplified track, Which comprised of the following sessions :   Developing a scalable Media Application using ASP.NET MVC - This was a kind of little advanced stuff. Anyways I couldn't understand much because this was not my piece of cake and I havent worked on this before ASP.Net MVC Unplugged - This was really great because this session covered from the basics of MVC showing what is Model,View and Controller and how it worked and the speaker went into the details of the same. Building RESTful Applications with the Open Data Protocol - There were some concepts explained about this from the basics on how to build RESTful Services and it went on till some advanced configurations of the same. Developing Scalable Web Applications with AppFabric Caching - This session showed about the integration of AppFabric with the .Net Web Applications. Instead of using Inproc Sessions, we can use this AppFabric as a substitute for Caching and outofProc Session Storage without writing code and doing a little bit of configurations which brings in High Scalability, performance to our applications. (But unfortunately there were no demos for this session ) Deep Dive : WCF RIA Services - This session was also an interactive one, in this the speaker presented from the basics of WCF and took a Book Store Application as a sample and explained all details concepts on linking with RIA Services   Apart from these sessions, in between there happened some small events in the breaks like Some discussions about Technology, Innovations Music Jokes Mimicry, etc. And on doing all these things, the developers were given some kool gifts / goodies like USBs, T-Shirts, etc. And today I got a chance to do the following certification : (70-562) Microsoft Certified Technology Specialist in .NET 3.5 Web Applications Since I already have an MCTS in .NET 2.0, I wanted to do an MCPD and for doing the same I was required to do an update to my MCTS with the .NET 3.5 framework and I did the same I cleared it and now am an MCTS in .NET 3.5 Web Apps And on doing this I got a T-Shirt and they gave something called Learning $ of worth 30$. And in various stalls for attending each quiz or some game or some referrals we got some Learning $ which we can redeem later based on our Total Learning $. I got 105 $ which i was able to redeem and got a Microsoft Learning BagPack, 1 free Microsoft certification offer, a laptop light and an e-learning content activated. And after all these sessions and small events, we had something called Demo Extravaganza like I mentioned yesterday. This was a great funfilled event with lot of goodies for the attendees. There were some lucky draw which enabled 2 attendees to get Netbooks (Sponsored by Intel) and 1 attendee to get X-box (Sponsored by Citrix). After Choosing the raffle in the lucky draw they kept it on a device called Microsoft Surface which is a kind of big touch screen device and on putting the raffle on that it detected the code of the attendee and said intelligently how many sessions that person has attended and if he has attended more than 5 he got a Netbook and this was coded by a guy called Imran. 
Apart from they showed demos on : Research by 2 Tamilnadu students from Krishna Arts and Science college, taken 1200 photographs of their college from different angles and put that up in Bing maps using silverlight and linked with Photosynth, which showed a 3d view of their college based on the photos they uploaded Reasearch by Microsoft on Panaramic HD views of the images. One young guy from Microsoft Research showed a demo of this on Srivilliputhur Andal Temple, in Tamil Nadu and its history with a panoramic view of the temple and the near by places with narration of the historical information on the same and with the videos embedded in it with high definition images which we can zoom to a very detailed level. Some Demo on a business app with Silverlight, Business Intelligence (BI) and maps integrated. It showed the sales of a particular product across locations. Some kool demos by 2 geeks who used Robots to show their development talents. 2 Robots fought with each other 2 Robots danced in sync for the A.R. Rehman song Humma Humma... A dream home project by Raman. He is currently using the same in his home too. Robots are controlling his home currently. They showed a video on this. Here are the list of activities that Robot does for him When he reads a book, robot automatically scans that and shows that image of that person in the screen (TV or comp) in front of him. It shows a wikipedia about that person. It says that person is not in linked in. do you want to add him If he sees an IPL Match news in the book and smiles it understands he is interested in that and opens a website related to that and shows the current game and the scorecard. It cooks for him It cleans the room for him whenever he leaves the house when he is doing something if some intruder comes inside his house his computer automatically switches his screen showing the video of the person coming inside. When he wakes up it automatically opens up the system, loads his mails and the news by the side, etc. Some Demos on Microsoft Pivot. This was there in livelabs but it is now available in getpivot.com its a pivoting of the pictorial data based on some categories and filters on the searches that we do. And finally on filling up some feedback forms we got T-Shirts and Microsoft Visual Studio 2010 Training Kit CDs. Whats more on TechEd??? Stay tuned!!! Will update you soon on the other happenings!! PS : I typed a lot of content for more than a hour but I pressed a backspace and it went to the previous page and all my content were lost and I was not able to retrieve the same and I typed everything again.

    Read the article

  • How to Create a Task From an Email Message in Outlook 2013

    - by Lori Kaufman
    If you need to do something related to an email message you received, you can easily create a task from the message in Outlook. A task can be created that contains all the content of the message without requiring you to re-enter the information. Creating a task in Outlook from an email message is different from flagging the message. As it says on Microsoft’s site: “When you flag an email message, the message appears in the To-Do List in Tasks and on the Tasks peek. However, if you delete the message, it also disappears from the To-Do List in Tasks and on the Tasks peek. Flagging a message doesn’t create a separate task.” Using the method described below to create a task from an email message, the task is separate from the message. The original message can be deleted or changed and the related task will not be affected. In Outlook, make sure the Mail section is active. If not, click Mail on the Navigation Bar at the bottom of the Outlook window. Then, click on the message you want to add to a task and drag it to Tasks on the Navigation Bar. A new Task window displays containing the email message and allowing you to enter the subject of the task, the Start and Due dates, Status, Priority, among other settings. When you have specified the settings for the task, click Save & Close in the Actions section of the Task tab. When the Task window closes, the Mail section is still active. If you move your mouse over Tasks on the Navigation Bar, a snippet from the new task displays in a popup window (the Task peek). Click Tasks to go to the Tasks section of Outlook. The To-Do List displays with your newly-added task listed in the middle pane. The right pane displays the details of the task and the contents of the message included in the task (as pictured at the beginning of this article). Click on Tasks to see a complete listing of all your tasks, including the one you just added from your email message. Note that attachments in an email message added to a new task are not copied to the task. You can also create new tasks by dragging contacts, calendar items, and notes to Tasks on the Navigation Bar.     

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations. Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014. The Customer DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.) The App: DevCon Security Reporting: Optimized & Ad Hoc Queries DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching. SSRS, SSAS, & MDX Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements. Ad Hoc Queries Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner). The Details Classic vs. columnstore before-&-after metrics are impressive: SSRS via SSAS went from 10 - 12 seconds with conventional structures to 1 second with columnstore (>10x), and ad hoc queries went from 5 - 7 minutes (300 - 420 seconds) to 1 - 2 seconds (>100x). Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
The Wins Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators.  As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude. Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired. PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds let directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.” Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, the second in a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of from 10x to 100x for all report queries, including canned queries as well as reducing time for results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
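
    For context, the change behind numbers like these is small on the SQL Server side: a single nonclustered columnstore index added to the fact table. The sketch below shows the idea from C#; the table, column, and index names are invented for illustration, and in SQL Server 2012 such an index makes the table read-only, so it has to be maintained as part of the nightly load:

    ```csharp
    using System.Data.SqlClient;

    public static class ColumnstoreSetup
    {
        public static void CreateColumnstoreIndex(string connectionString)
        {
            // Cover the columns that ad hoc queries aggregate and group over.
            const string ddl = @"
                CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSecurityEvent
                ON dbo.FactSecurityEvent
                    (EventDate, CustomerKey, SiteKey, EventTypeKey, DurationSeconds);";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(ddl, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }
    ```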

    Read the article

  • iPack -The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executable-s. To protect them from unauthorized modifications and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an.ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executable-s (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as a last step of creating .ipa package on various operating systems. Prototype has been tested by creating a final distributable for JavaFX application that runs on iPad, all done on Windows 7. Source Code The source code of iPac tool is in OpenJFX project repository. You can find it in: <openjfx root>/rt/tools/ios/Maven/ipack To build the iPack tool use: rt/tools/ios/Maven/ipack$ mvn package After building, you can run the tool: java -jar <path to ipack.jar> <arguments>  Signing keystore The tool uses a java key store to read the signing certificate and the associated private key. To prepare such keystore users can use keytool from JDK. One possible scenario is to import an existing private key and the certificate from a key store used on Mac OS: To list the content of an existing key store and identify the source alias: keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password> To create Java key store and import the private key with its certificate to the keys store: keytool -importkeystore \ -destkeystore <dst keystore> -deststorepass <dst keystore password> \ -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \ -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password> Another scenario would be to generate a private / public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple one can then import the certificate response back to the Java key store and complete the signing certificate entry. In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command: keytool -list -v -keystore <ipack keystore> -storepass <keystore password> When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we need first to create the chain in a separate file. It is easy to create such chain when working with certificates in Base-64 encoded PEM format. A certificate chain can be created by concatenating PEM certificates, which should form the chain, into a single file. 
For iOS signing we need the following certificates in our chain: Apple Root CA Apple Worldwide Developer Relations CA Our signing leaf certificate To convert a certificate from the binary DER format (.der, .cer) to PEM format: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem To export the signing certificate into PEM format: keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem andSigningCert.pem, it can be imported back into the keystore with: keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem To summarize, the following example shows the full certificate chain replacement process: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem >SigningCertChain.pem keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem keytool -list -v -keystore ipack.ks -storepass keystorepwd Usage When the ipack tool is started with no arguments it prints the following usage information: -appname MyApplication -appid com.myorg.MyApplication     Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ] Signing options: -keystore <keystore> keystore to use for signing -storepass <password> keystore password -alias <alias> alias for the signing certificate chain and the associated private key -keypass <password> password for the private key Application options: -basedir <directory> base directory from which to derive relative paths -appdir <directory> directory with the application executable and resources -appname <file> name of the application executable -appid <id> application identifier Example: ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication    

    Read the article

  • HTTPS Everywhere Extension Updates to Version 3.0, Adds Protection for 1,500 More Websites

    - by Asian Angel
    If one of your security goals is to encrypt your communication with websites as much as possible, then you will definitely be pleased with the latest update to the HTTPS Everywhere extension for Firefox and Chrome. This latest release adds encryption protection for an additional 1,500 websites to help make your browsing experience more secure than ever. You can learn more about this latest release, and install the extension for Firefox and/or Chrome, directly from the blog post linked below. HTTPS Everywhere 3.0 protects 1,500 more sites [via Softpedia]

    Read the article

  • Canonical url for a home page and trailing slashes

    - by serg
    My home page could potentially be linked as: http://example.com http://example.com/ http://example.com/?ref=1 http://example.com/index.html http://example.com/index.html?ref=2 (the same page is served for all of those URLs). I am thinking about defining a canonical URL to make sure Google doesn't consider those URLs to be different pages: <link rel="canonical" href="/" /> (relative) <link rel="canonical" href="http://example.com/" /> (trailing slash) <link rel="canonical" href="http://example.com" /> (no trailing slash) Which one should be used? I would just use "/", but messing with canonical URLs seems like scary business, so I wanted to double-check first. Is defining a canonical URL for a home page a good idea at all?
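
    For what it is worth, the usual recommendation is to emit an absolute canonical URL, with scheme, host, and trailing slash included. As a minimal sketch, assuming the home page happened to be an ASP.NET Web Forms page with a <head runat="server">, it could add the tag like this (the same idea applies in any stack):

    ```csharp
    using System;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;

    public partial class HomePage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Emits <link rel="canonical" href="http://example.com/" /> so that /,
            // /?ref=1, /index.html, etc. all resolve to a single canonical URL.
            var canonical = new HtmlLink { Href = "http://example.com/" };
            canonical.Attributes["rel"] = "canonical";
            Header.Controls.Add(canonical);
        }
    }
    ```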

    Read the article

  • Why is my Ubuntu system not using the correct kernel?

    - by Brooks Moses
    We're having a bit of confusion on a Ubuntu remote system -- /boot/grub/menu.lst suggests the system should boot into kernel 2.6.35-30-generic, but it is actually running kernel 2.6.32-27-generic. Where should I look to start figuring out why this is happening and how to fix it? Specifically, /boot/grub/menu.lst has default 0 and the first entry is title Ubuntu 10.10, kernel 2.6.35-30-generic uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/vmlinuz-2.6.35-30-generic root=UUID=67717ee3-cbf9-45d2- ae97-820256f4c4fd ro quiet splash initrd /boot/initrd.img-2.6.35-30-generic Further, I've confirmed that /boot/vmlinuz-2.6.35-30-generic and /boot/initrd.img-2.6.35-30-generic exist and have appropriate permissions. Meanwhile, uname -a returns: $ uname -a Linux cuda2 2.6.32-27-generic #49-Ubuntu SMP Thu Dec 2 00:51:09 UTC 2010 x86_64 GNU/Linux Edit: I've also tried re-running update-grub, and rebooting; no luck. Here's the full menu.lst, as requested by a commenter: # menu.lst - See: grub(8), info grub, update-grub(8) # grub-install(8), grub-floppy(8), # grub-md5-crypt, /usr/share/doc/grub # and /usr/share/doc/grub-legacy-doc/. ## default num # Set the default entry to the entry number NUM. Numbering starts from 0, and # the entry number 0 is the default if the command is not used. # # You can specify 'saved' instead of a number. In this case, the default entry # is the entry saved with the command 'savedefault'. # WARNING: If you are using dmraid do not use 'savedefault' or your # array will desync and will not let you boot your system. default 0 ## timeout sec # Set a timeout, in SEC seconds, before automatically booting the default entry # (normally the first entry defined). timeout 3 ## hiddenmenu # Hides the menu by default (press ESC to see the menu) hiddenmenu # Pretty colours #color cyan/blue white/blue ## password ['--md5'] passwd # If used in the first section of a menu file, disable all interactive editing # control (menu entry editor and command-line) and entries protected by the # command 'lock' # e.g. password topsecret # password --md5 $1$gLhU0/$aW78kHK1QfV3P2b2znUoe/ # password topsecret # # examples # # title Windows 95/98/NT/2000 # root (hd0,0) # makeactive # chainloader +1 # # title Linux # root (hd0,1) # kernel /vmlinuz root=/dev/hda2 ro # # # Put static boot stanzas before and/or after AUTOMAGIC KERNEL LIST ### BEGIN AUTOMAGIC KERNELS LIST ## lines between the AUTOMAGIC KERNELS LIST markers will be modified ## by the debian update-grub script except for the default options below ## DO NOT UNCOMMENT THEM, Just edit them to your needs ## ## Start Default Options ## ## default kernel options ## default kernel options for automagic boot options ## If you want special options for specific kernels use kopt_x_y_z ## where x.y.z is kernel version. Minor versions can be omitted. ## e.g. kopt=root=/dev/hda1 ro ## kopt_2_6_8=root=/dev/hdc1 ro ## kopt_2_6_8_2_686=root=/dev/hdc2 ro # kopt=root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro ## default grub root device ## e.g. groot=(hd0,0) # groot=67717ee3-cbf9-45d2-ae97-820256f4c4fd ## should update-grub create alternative automagic boot options ## e.g. alternative=true ## alternative=false # alternative=true ## should update-grub lock alternative automagic boot options ## e.g. lockalternative=true ## lockalternative=false # lockalternative=false ## additional options to use with the default boot option, but not with the ## alternatives ## e.g. 
defoptions=vga=791 resume=/dev/hda5 # defoptions=quiet splash ## should update-grub lock old automagic boot options ## e.g. lockold=false ## lockold=true # lockold=false ## Xen hypervisor options to use with the default Xen boot option # xenhopt= ## Xen Linux kernel options to use with the default Xen boot option # xenkopt=console=tty0 ## altoption boot targets option ## multiple altoptions lines are allowed ## e.g. altoptions=(extra menu suffix) extra boot options ## altoptions=(recovery) single # altoptions=(recovery mode) single ## controls how many kernels should be put into the menu.lst ## only counts the first occurence of a kernel, not the ## alternative kernel options ## e.g. howmany=all ## howmany=7 # howmany=all ## specify if running in Xen domU or have grub detect automatically ## update-grub will ignore non-xen kernels when running in domU and vice versa ## e.g. indomU=detect ## indomU=true ## indomU=false # indomU=detect ## should update-grub create memtest86 boot option ## e.g. memtest86=true ## memtest86=false # memtest86=true ## should update-grub adjust the value of the default booted system ## can be true or false # updatedefaultentry=false ## should update-grub add savedefault to the default options ## can be true or false # savedefault=false ## ## End Default Options ## title Ubuntu 10.10, kernel 2.6.35-30-generic uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/vmlinuz-2.6.35-30-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro quiet splash initrd /boot/initrd.img-2.6.35-30-generic title Ubuntu 10.10, kernel 2.6.35-30-generic (recovery mode) uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/vmlinuz-2.6.35-30-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro single initrd /boot/initrd.img-2.6.35-30-generic title Ubuntu 10.10, kernel 2.6.32-32-server uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/vmlinuz-2.6.32-32-server root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro quiet splash initrd /boot/initrd.img-2.6.32-32-server title Ubuntu 10.10, kernel 2.6.32-32-server (recovery mode) uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/vmlinuz-2.6.32-32-server root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro single initrd /boot/initrd.img-2.6.32-32-server title Ubuntu 10.10, kernel 2.6.32-27-generic uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/vmlinuz-2.6.32-27-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro quiet splash initrd /boot/initrd.img-2.6.32-27-generic title Ubuntu 10.10, kernel 2.6.32-27-generic (recovery mode) uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/vmlinuz-2.6.32-27-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro single initrd /boot/initrd.img-2.6.32-27-generic title Chainload into GRUB 2 root 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/grub/core.img title Ubuntu 10.10, memtest86+ uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd kernel /boot/memtest86+.bin ### END DEBIAN AUTOMAGIC KERNELS LIST To add complication and joy to my life, this is a desktop machine in a remote datacenter; we don't have either local access or serial-console access. Suggestions?

    Read the article

  • Is it possible to prioritize which folders get synced first when using Ubuntu One?

    - by Philippe
    I face the problem that U1 syncs my files in a given order, and I'd like to change that order. Consider this: on a weekend I work, and I may also copy the contents of my photo SD card onto my notebook. The next time I boot my work computer, I might be sitting there waiting for hours until U1 has synced/downloaded all the photos to my workstation, while the files I need for work are the last in the '--waiting' list. I don't mind if Ubuntu One is a slow downloader; I would just be happy if I could specify that all files in a certain folder (and all of its subfolders) always need to be downloaded first. I'm aware that there was once a way to move some files to the beginning of the sync list, but it was very clumsy, requiring you to provide the folder id, etc., and in the current version of U1 I can't even find it any more. Any suggestions on how to always prioritize the same folder?

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes with both software development models, the Waterfall Model clearly resembles the Tortoise: it is typically a slow-moving process that is broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise is quoted several times in the story saying "Slow and steady wins the race," which is the perfect mantra for the Waterfall Model, a model often seen as cumbersome and slow moving. Waterfall Model Phases Requirement Analysis & Definition This phase focuses on defining requirements for the project to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality against the needs of the business and the desires of the end users. The desired output of this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project. System Design This phase focuses primarily on the architectural design of a system and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase; however, major environmental decisions, such as hardware and platform choices, are typically made in this phase. The basic goal of this phase is to design an application at the system level, in which classes, interfaces, and interactions are defined. Scalability, distribution, and reliability should also be considered for all of these decisions. The desired output of this phase is a functional design document that states all of the architectural decisions made for the project, along with diagrams such as sequence and class diagrams. Software Design This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document. Implementation / Coding This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and are integrated as the system is put together as a whole. Software Integration & Verification This phase focuses primarily on testing each of the units of code developed as well as testing the system as a whole. There are two basic types of testing in this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit and ensure that it performs its desired task. Integration testing tests the system as a whole, because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected and actual results.
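
    To make the unit-versus-integration distinction above concrete, here is a minimal sketch of a unit test; the NUnit framework and the Calculator class are illustrative assumptions rather than part of the original article:

    ```csharp
    using NUnit.Framework;

    // Illustrative unit under test: a single, isolated unit of code.
    public class Calculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_ReturnsSumOfTwoNumbers()
        {
            // A unit test exercises one unit in isolation and compares the
            // actual result against the expected result from the test plan.
            var calculator = new Calculator();
            Assert.AreEqual(5, calculator.Add(2, 3));
        }
    }
    ```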
System Verification This phase primarily focuses on testing the system as a whole in regards to the list of project requirements and desired operating environment. Operation & Maintenance his phase primarily focuses on handing off the competed project over to the customer so that they can verify that all of their requirements have been met based on their original requirements. This phase will also validate the correctness of their requirements and if any changed need to be made. In addition, any problems not resolved in the previous phase will be handled in this section. The Waterfall Model’s linear and sequential methodology does offer a project certain advantages and disadvantages. Advantages of the Waterfall Model Simplistic to implement and execute for projects and/or company wide Limited demand on resources Large emphasis on documentation Disadvantages of the Waterfall Model Completed phases cannot be revisited regardless if issues arise within a project Accurate requirement are never gather prior to the completion of the requirement phase due to the lack of clarification in regards to client’s desires. Small changes or errors that arise in applications may cause additional problems The client cannot change any requirements once the requirements phase has been completed leaving them no options for changes as they see their requirements changes as the customers desires change. Excess documentation Phases are cumbersome and slow moving Learn more about the Major Process in the Sofware Development Life Cycle and Waterfall Model. Conversely, the Hare shares similar traits with the prototyping software development model in that ideas are rapidly converted to basic working examples and subsequent changes are made to quickly align the project with customers desires as they are formulated and as software strays from the customers vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer needs and request grow. Projects are initially designed according to basic requirements and are refined as requirement become more refined. This process allows customer to feel their way around the application to ensure that they are developing exactly what they want in the application This model also works well for determining the feasibility of certain approaches in regards to an application. Prototypes allow for quickly developing examples of implementing specific functionality based on certain techniques. Advantages of Prototyping Active participation from users and customers Allows customers to change their mind in specifying requirements Customers get a better understanding of the system as it is developed Earlier bug/error detection Promotes communication with customers Prototype could be used as final production Reduced time needed to develop applications compared to the Waterfall method Disadvantages of Prototyping Promotes constantly redefining project requirements that cause major system rewrites Potential for increased complexity of a system as scope of the system expands Customer could believe the prototype as the working version. Implementation compromises could increase the complexity when applying updates and or application fixes When companies trying to decide between the Waterfall model and Prototype model they need to evaluate the benefits and disadvantages for both models. 
Typically smaller companies or projects that have major time constraints typically head for more of a Prototype model approach because it can reduce the time needed to complete the project because there is more of a focus on building a project and less on defining requirements and scope prior to the start of a project. On the other hand, Companies with well-defined requirements and time allowed to generate proper documentation should steer towards more of a waterfall model because they are in a position to obtain clarified requirements and have to design and optimal solution prior to the start of coding on a project.

    Read the article

  • How valuable are you to your organization?

    - by Lance Shaw
    I don't know about you, but I find it easy to get bogged down with the daily list of tasks and deliverables.  We all have lots to do and it all seems to be due tomorrow.  If you are reading this blog, then your to-do list is almost certainly filled with tasks related to the management, processing and publishing of information.  As we get mired in the daily routine of making sure that the content management needs of the organization are met, we can easily lose sight of the value that we bring.  After all, if information and content are the lifeblood of our organizations, then surely maintaining the healthy flow of that information has real value.  But how can you measure that value and bring it forward on your résumé or your list of achievements in time for your next performance review? The AIIM organization has spent a lot of time recently researching the value of certification for "information professionals".  When it comes to enterprise content management (ECM) there are many areas of specialization, including records management, content archivist, digital asset manager, content librarian and more.  Specialization can clearly drive up your value, but it can also lock you into a narrow niche area of focus.  AIIM has found that what companies also need is someone who can apply their knowledge of how information is managed within the operational scope of the business in order to drive real, measurable strategic value.  When you can showcase the value of a broader, business-wide mindset to your management, you have more opportunity to make professional progress and drive real growth where it counts: your paycheck.   We here on the Oracle WebCenter team partnered with AIIM on the research they performed around the value of an information professional certification program. In a webinar this week, Doug Miles of AIIM and I will be talking about the results of that recent survey and what it will mean in the future to be recognized as a "Certified Information Professional" (CIP).  Oracle sponsored this research to help individuals and companies understand the value of enterprise content management and what it means across the entire organization. I hope you will join us. If any of us were stopped in the street and asked about it, I bet most of us would think of ourselves as an "Information Professional".  Now we have a way to actually prove it!  There's only one downside that I can see...  you will have to get your business cards updated to include the "CIP" acronym after your name.  I think you will agree that is a price worth paying!

    Read the article

  • What is the best solution for document archiving?

    - by Anders Wallenquist
    I'm looking for a utility that helps me (and my colleagues) archive documents in a systematic manner (like Zeitgeist, but permanent). The utility has to:
- Clean out old documents from desktops and store them on a server, as automatically and consistently as possible, perhaps from just a few locations (the Documents directory)
- Store documents on cheap, large media for many years to come - hard disk and file system, maybe?
- Be easy to maintain and manage for a small organization
- Make documents easy to find and restore
One systematic approach could be a directory structure by year, month, user - or user, year, month. It's a plus if documents could be linked to a project, if documents could be searchable, and if "documents" could also include mail and IM discussions, not only traditional OpenOffice documents. Any ideas?
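(Not part of the original question, but as a rough illustration of the year/month/user layout described above, here is a minimal C# sketch. The share path, the 90-day cutoff and the naive duplicate guard are assumptions for illustration only; a real archiving utility would also need indexing, search and error handling.)

using System;
using System.IO;

// Minimal sketch: sweep the local Documents folder and move files older than a cutoff
// into an archive share laid out as \\fileserver\archive\<user>\<year>\<month>\.
// The share name and the 90-day cutoff are hypothetical.
class DocumentArchiver
{
    static void Main()
    {
        string source = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
        string archiveRoot = @"\\fileserver\archive";   // hypothetical server share
        string user = Environment.UserName;
        TimeSpan maxAge = TimeSpan.FromDays(90);         // hypothetical cutoff

        foreach (string file in Directory.GetFiles(source, "*", SearchOption.AllDirectories))
        {
            DateTime modified = File.GetLastWriteTime(file);
            if (DateTime.Now - modified < maxAge)
                continue;                                // still in active use, leave it

            // Build the <user>\<year>\<month> target folder from the file's last-write time.
            string targetDir = Path.Combine(archiveRoot, user,
                                            modified.Year.ToString(),
                                            modified.Month.ToString("00"));
            Directory.CreateDirectory(targetDir);

            string target = Path.Combine(targetDir, Path.GetFileName(file));
            if (!File.Exists(target))                    // naive duplicate guard
                File.Move(file, target);
        }
    }
}

Run on a schedule, something along these lines would cover the "as automatic as possible" requirement; search and project linking would still need something on top, such as an indexer pointed at the archive share.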

    Read the article

  • Azure Mobile Services: available modules

    - by svdoever
    Azure Mobile Services has documented a set of objects available in your Azure Mobile Services server-side scripts on its documentation page, Mobile Services server script reference. Although the documented list covers the common things you want to do, sooner rather than later you will look for more functionality to include in your scripts, especially with the newly provided feature that lets you create your own custom APIs. If you use Git it is now possible to add any NPM module (node package manager module - say, the NuGet of the node world), but why include a module if it is already available out of the box? And you can only use Git with Azure Mobile Services if you are an administrator on your Azure Mobile Service, not if you are a co-administrator (this will be solved in the future). Until now I did some trial-and-error experimentation to test whether a certain module was available. This is easiest to do as follows.

Create a custom API, for example named experiment. In this API use the following code:

exports.get = function (request, response) {
    var module = "nonexistingmodule";
    var m = require(module);
    response.send(200, "Module '%s' found.", module);
};

You can now test your service with the following request in your browser:

https://yourservice.azure-mobile.net/api/experiment

If you get the result {"code":500,"error":"Error: Internal Server Error"}, you know that the module does not exist. In your logs you will find the following error:

Error in script '/api/experiment.json'. Error: Cannot find module 'nonexistingmodule' [external code] at C:\DWASFiles\Sites\yourservice\VirtualDirectory0\site\wwwroot\App_Data\config\scripts\api\experiment.js:3:13 [external code]

If you require an existing (undocumented) module like the OAuth module in the following code, you will get success as a result:

exports.get = function (request, response) {
    var module = "oauth";
    var m = require(module);
    response.send(200, "Module '" + module + "' found.");
};

If we look at the standard node.js documentation we see an extensive list of modules that can be used from your code. If we look at the list of files available in the Azure Mobile Services platform, as documented in the blog post Azure Mobile Services: what files does it consist of?, we see a folder node_modules with many more modules that are used to build the Azure Mobile Services functionality, but that can also be utilized from your server-side node script code:

apn - An interface to the Apple Push Notification service for Node.js.
dpush - Send push notifications to Android devices using GCM.
mpns - A Node.js interface to the Microsoft Push Notification Service (MPNS) for Windows Phone.
wns - Send push notifications to Windows 8 devices using WNS.
pusher - Node library for the Pusher server API (see also: http://pusher.com/)
azure - Windows Azure Client Library for node.
express - Sinatra inspired web development framework.
oauth - Library for interacting with OAuth 1.0, 1.0A, 2 and Echo. Provides simplified client access and allows for construction of more complex APIs and OAuth providers.
request - Simplified HTTP request client.
sax - An evented streaming XML parser in JavaScript.
sendgrid - A NodeJS implementation of the SendGrid API.
sqlserver - In the node repository known as msnodesql - Microsoft Driver for Node.js for SQL Server.
tripwire - Break out from scripts blocking the node.js event loop.
underscore - JavaScript's functional programming helper library.
underscore.string - String manipulation extensions for the Underscore.js javascript library.
xml2js - Simple XML to JavaScript object converter.
xmlbuilder - An XML builder for node.js.

As stated before, many of these modules are used to provide the functionality of the Azure Mobile Services platform and in general should not be used directly. On the other hand, I badly needed OAuth to authenticate against the new v1.1 services of Twitter, and was very happy that a require('oauth') and a few lines of code did the job. Based on the above modules, and on a lot of code in the other JavaScript files of the Azure Mobile Services platform, a set of global objects is provided that can be used from your server-side node.js script code. In future blog posts I will go into more detail about how this code is built up, all starting at the node.js express entry point app.js.

    Read the article

  • Fast programmatic compare of "timetable" data

    - by Brendan Green
    Consider train timetable data, where each service (or "run") has a data structure such as:

public class TimeTable
{
    public int Id { get; set; }
    public List<Run> Runs { get; set; }
}

public class Run
{
    public List<Stop> Stops { get; set; }
    public int RunId { get; set; }
}

public class Stop
{
    public int StationId { get; set; }
    public TimeSpan? StopTime { get; set; }
    public bool IsStop { get; set; }
}

We have a list of runs that operate against a particular line (the TimeTable class). Further, whilst we have a set collection of stations on a line, not all runs stop at all stations (that is, IsStop would be false and StopTime would be null). Now, imagine that we have received the initial timetable, processed it, and loaded it into the above data structure. Once the initial load is complete, it is persisted into a database - the data structure is used only to load the timetable from its source and to persist it to the database. We are now receiving an updated timetable. The updated timetable may or may not contain changes - we are not told whether any changes are present. What I would like to do is compare each run in an efficient manner. I don't want to simply replace each run. Instead, I want a background task that runs periodically, downloads the updated timetable dataset, and compares it to the current timetable. If differences are found, some action (not relevant to the question) will take place. I was initially thinking of some sort of checksum process where I would load both runs (the one from the newly received timetable and the one persisted to the database) into the data structure, then add up all the hour components of StopTime and all the minute components of StopTime and compare the results (i.e. both the sum of hours and the sum of minutes would be the same, with differences introduced if a stop time is changed, a stop is deleted or a new stop is added). Would that be a valid way to check for differences, or is there a better way to approach this problem? I can see a problem where, for example, one stop changed to be 2 minutes earlier and another changed to be 2 minutes later would produce a net zero change. Or am I overthinking this, and would it just be simpler to brute-force check all stops to ensure that:
- The updated run stops at the same stations; and
- Each stop is at the same time?
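(Not from the original post, but as a sketch of the brute-force option described above: it assumes the two versions of a run have already been matched up by RunId and that both stop lists are in the same station order.)

// Sketch only: compares two versions of a run stop-by-stop.
// The Run and Stop types are the classes defined in the question above.
static bool RunsMatch(Run current, Run updated)
{
    if (current.Stops.Count != updated.Stops.Count)
        return false;                                   // a stop was added or removed

    for (int i = 0; i < current.Stops.Count; i++)
    {
        Stop a = current.Stops[i];
        Stop b = updated.Stops[i];

        // Any change to station, stopping flag or time counts as a difference,
        // so offsetting +2/-2 minute edits cannot cancel out the way they would
        // with a summed-minutes checksum.
        if (a.StationId != b.StationId) return false;
        if (a.IsStop != b.IsStop) return false;
        if (a.StopTime != b.StopTime) return false;     // lifted Nullable<TimeSpan> compare
    }
    return true;
}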

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx
One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build Web Parts, Master Pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by building their site pages and adding the web parts to those pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss and provide a solution for is a package that has:
1. Master Page
2. Page Layout
3. Page Web Parts
4. Site Pages
Almost all of it is done in code, without the development team or the developers having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that in development we do this - in fact, we build these pages incrementally, testing our web parts as we go. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and voilà, their site appears! I had a project that had me build 8 pages like this as part of the solution. In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site it is a generic master page for a SharePoint 2010 site with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development! It is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code. Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages. What you are seeing is a complete solution to add a master page to a site collection, which contains:
1. A Master Page module which contains the Master Page and Page Layout
2. The Footer module to add the Footer Web Part
3. Miscellaneous modules to add images, jQuery, CSS and a subsite page
4. Three features and two feature event receivers:
   a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
   b. DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver it changes the master page to DJGreenMaster, creates the footer list and checks the files in
   c. DJGreenMasterWebParts, which adds the Footer Web Part to the site collection
I won't go over the code for this, as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So what we have is the basis to begin what is germane to this discussion: the first two requirements are completed, and I now need to add the page web parts and build the pages in code. For the page web parts, I will use one downloaded from CodePlex, which does not use a SharePoint custom list for simplicity - the Weather Web Part - and another downloaded from MSDN, a SharePoint custom calendar web part, to which I added some jQuery functionality to make the events color-coded beyond the built-in 10 overlays!
Here is the solution with the added projects, a screen shot of the Weather Web Part deployed, and a screen shot of the Site Calendar with jQuery. Okay, now we get to the final item: creating the publishing pages. We need to add a feature to the DJGreenMaster project - I will name it DJSitePages - and also add an event receiver. We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. Add a reference to Microsoft.SharePoint.Publishing.dll, found in the ISAPI folder of the 14 hive. First we will add some static methods that we will call from our event receiver:

private static void checkOut(string pagename, PublishingPage p)
{
    if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
    {
        if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
        {
            p.CheckOut();
        }

        if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
        {
            p.CheckIn("initial");
            p.CheckOut();
        }
    }
}

private static void checkin(PublishingPage p, PublishingWeb pw)
{
    SPFile publishFile = p.ListItem.File;

    if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
    {
        publishFile.CheckIn("CheckedIn");
        publishFile.Publish("published");
    }

    // In case of content approval, approve the file (requires a publishing site)
    if (pw.PagesList.EnableModeration)
    {
        publishFile.Approve("Initial");
    }
    publishFile.Update();
}

In a publishing site, CheckIn and CheckOut are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    object oParent = properties.Feature.Parent;

    if (properties.Feature.Parent is SPWeb)
    {
        currentWeb = (SPWeb)oParent;
        currentSite = currentWeb.Site;
    }
    else
    {
        currentSite = (SPSite)oParent;
        currentWeb = currentSite.RootWeb;
    }

    // Create the publishing pages
    CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
    //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
}

Basically we are calling the CreatePublishingPage method with these parameters: the current web, the name of the page, the page layout, and the title of the page.
Let's look at the CreatePublishingPage method:

private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
{
    PublishingSite pubSiteCollection = new PublishingSite(site.Site);
    PublishingWeb pubSite = null;
    if (pubSiteCollection != null)
    {
        // Assign an object to the pubSite variable
        if (PublishingWeb.IsPublishingWeb(site))
        {
            pubSite = PublishingWeb.GetPublishingWeb(site);
        }
    }

    // Search for the page layout to use for the new page; if it cannot be found,
    // return, because the page has to be based on an existing page layout
    PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
    if (currentPageLayout == null)
    {
        return;
    }

    PublishingPageCollection pages = pubSite.GetPublishingPages();
    foreach (PublishingPage p in pages)
    {
        // The page already exists
        if (p.Name == pageName) return;
    }

    PublishingPage newPage = pages.Add(pageName, currentPageLayout);
    newPage.Description = pageName.Replace(".aspx", "");
    // Here you can set some properties like:
    newPage.IncludeInCurrentNavigation = true;
    newPage.IncludeInGlobalNavigation = true;
    newPage.Title = title;

    // Build the page
    switch (pageName)
    {
        case "Home.aspx":
            checkOut("Home.aspx", newPage);
            BuildHomePage(site, newPage);
            break;
        default:
            break;
    }

    // newPage.Update();
    // Now we can check the newly created page in to the Pages library
    checkin(newPage, pubSite);
}

The narrative of what is going on here is:
1. We find out whether we are dealing with a publishing web.
2. We get the page layout.
3. We create the page in the Pages list.
4. Based on the page name we build that page. (Here is where we can add the methods to build multiple pages.)
In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts we need to add references to the two web part projects in the solution:

using WeatherWebPart.WeatherWebPart;
using CSSharePointCustomCalendar.CustomCalendarWebPart;

We can then reference them in the BuildHomePage method.
Let's look at BuildHomePage:

private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
{
    // Get the web part manager for the page. For other pages, repeat this pattern
    // and swap in the web parts that page needs.
    SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(
        web.Url + "/Pages/Home.aspx",
        System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

    WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart()
    {
        ChromeType = PartChromeType.None,
        Title = "Todays Weather",
        AreaCode = "2504627"
    };
    //Dictionary<string, string> wwpDic = new Dictionary<string, string>();
    //wwpDic.Add("AreaCode", "2504627");
    //setWebPartProperties(wwp, "WeatherWebPart", wwpDic);

    // Add the web part to a page layout web part zone
    mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

    CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp = new CustomCalendarWebPart()
    {
        ChromeType = PartChromeType.None,
        Title = "Corporate Calendar",
        listName = "CorporateCalendar"
    };
    mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

    pubPage.Update();
}

Here is what we are doing:
1. We get a reference to the SharePoint limited web part manager for Home.aspx.
2. We instantiate a new Weather Web Part and use the manager to add it to the page in a web part zone identified by ID - hence the need for a page layout whose zone IDs the developer knows.
3. We instantiate the Calendar Web Part and use the manager to add it to the page.
4. We call the publishing page's Update method.
5. Lastly, the CreatePublishingPage method checks in the page just created.
Here is a screen shot of the page right after a deploy!

Okay! I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built! So what am I saying? Build your web parts, master pages and other artifacts, and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution: Code

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

Brief explanation: My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient. (Sort of like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.

Long explanation: Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the viewer). However, prior to being viewed in the viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag).

This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000).

I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices.

Choice 1: Individual meta-data vs. centralized meta-data
- Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images.
- Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes at intervals, as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
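(Not from the original question, but one minimal way to make the per-tag "dirty" state concrete; all type and member names here are illustrative, not from the original post.)

using System.Collections.Generic;

// Sketch: per-image tag state where "Unknown" stands in for the question's "dirty" flag.
enum TagState { Unknown, Applied, NotApplied }

class ImageRecord
{
    public string FilePath { get; set; }

    // Keyed by tag name. A tag missing from the dictionary is also treated as Unknown,
    // so a newly introduced tag automatically starts out "dirty" for every image
    // without rewriting any per-image data.
    public Dictionary<string, TagState> Tags { get; } = new Dictionary<string, TagState>();

    public TagState StateOf(string tag) =>
        Tags.TryGetValue(tag, out var state) ? state : TagState.Unknown;

    public bool IsDirtyFor(string tag) => StateOf(tag) == TagState.Unknown;
}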

    Read the article

  • How to handle gender and sexual orientation on a form with select boxes more inclusively?

    - by Drew
    The existing question and its answers are not satisfying when one wants to be more inclusive. Gender as Male, Female or No Answer works for some sites, but not others. This takes the view that gender is not the same as sex (which is assigned at birth based on bodily characteristics). Would it be more inclusive to add these two options: Transgender Male and Transgender Female? In other words, would the following options be more inclusive?

Gender Identity
- Male
- Female
- Transgender Male
- Transgender Female
- No Answer

Sexual Orientation
- Straight
- Lesbian
- Gay
- Bisexual
- Asexual
- No Answer

Gender Expression could also be included, but I think most people would find that too confusing (it is defined on GLAAD's website, linked below). I'm going off what I've read in GLAAD's Transgender Glossary of Terms. Thanks!

    Read the article

< Previous Page | 375 376 377 378 379 380 381 382 383 384 385 386  | Next Page >