Search Results

Search found 66560 results on 2663 pages for 'value type'.


  • Help with "Cannot find ContentTypeReader BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral." needed

    - by rFactor
    Hi, I have this irritating problem in XNA that I have spent my Saturday with:

        Cannot find ContentTypeReader BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral.

    It throws that when I do this (within the game assembly's Renderer.cs class):

        this.terrain = this.game.Content.Load<Model>("heightmap");

    There is a heightmap.bmp, and I don't think there's anything wrong with it, because I used it in a previous version before I switched to this new, better system. So, I have a GeneratedGeometryPipeline assembly that has these classes: HeightMapInfoContent, HeightMapInfoWriter, TerrainProcessor. The GeneratedGeometryPipeline assembly does not reference any other assemblies in the solution. Then I have the game assembly, which likewise references no other solution assemblies and has these classes: HeightMapInfo, HeightMapInfoReader. All game assembly classes are under namespace BB, and the GeneratedGeometryPipeline classes are under namespace GeneratedGeometryPipeline. I do not understand why it does not find the reader. Here's the code for GeneratedGeometryPipeline.HeightMapInfoWriter:

        /// <summary>
        /// A TypeWriter for HeightMapInfo, which tells the content pipeline how to save the
        /// data in HeightMapInfo. This class should match HeightMapInfoReader: whatever the
        /// writer writes, the reader should read.
        /// </summary>
        [ContentTypeWriter]
        public class HeightMapInfoWriter : ContentTypeWriter<HeightMapInfoContent>
        {
            protected override void Write(ContentWriter output, HeightMapInfoContent value)
            {
                output.Write(value.TerrainScale);
                output.Write(value.Height.GetLength(0));
                output.Write(value.Height.GetLength(1));
                foreach (float height in value.Height)
                {
                    output.Write(height);
                }
                foreach (Vector3 normal in value.Normals)
                {
                    output.Write(normal);
                }
            }

            /// <summary>
            /// Tells the content pipeline what CLR type the
            /// data will be loaded into at runtime.
            /// </summary>
            public override string GetRuntimeType(TargetPlatform targetPlatform)
            {
                return "BB.HeightMapInfo, BB, Version=1.0.0.0, Culture=neutral";
            }

            /// <summary>
            /// Tells the content pipeline what worker type
            /// will be used to load the data.
            /// </summary>
            public override string GetRuntimeReader(TargetPlatform targetPlatform)
            {
                return "BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral";
            }
        }

    Can someone help me out?
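    For reference, a reader matching this writer would normally look something like the sketch below. This is a minimal sketch only, since the question doesn't show the reader: the read-back order follows what HeightMapInfoWriter writes, and the HeightMapInfo constructor is an assumption. One thing worth double-checking against this error: the string "BB.HeightMapInfoReader, BB, ..." can only resolve if the game assembly really is named BB, so a mismatch between the project's assembly name and that runtime reader string would produce exactly this "Cannot find ContentTypeReader" failure.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Content;

        namespace BB
        {
            // Hypothetical sketch of the runtime reader; field order mirrors the writer above.
            public class HeightMapInfoReader : ContentTypeReader<HeightMapInfo>
            {
                protected override HeightMapInfo Read(ContentReader input, HeightMapInfo existingInstance)
                {
                    float terrainScale = input.ReadSingle();
                    int width = input.ReadInt32();
                    int height = input.ReadInt32();

                    float[,] heights = new float[width, height];
                    for (int x = 0; x < width; x++)
                        for (int z = 0; z < height; z++)
                            heights[x, z] = input.ReadSingle();

                    Vector3[,] normals = new Vector3[width, height];
                    for (int x = 0; x < width; x++)
                        for (int z = 0; z < height; z++)
                            normals[x, z] = input.ReadVector3();

                    // Assumed constructor; adjust to however BB.HeightMapInfo is defined.
                    return new HeightMapInfo(heights, normals, terrainScale);
                }
            }
        }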


  • Designing binary operations (AND, OR, NOT) in graph DBs like neo4j

    - by Nicholas
    I'm trying to create a recipe website using a graph database, specifically neo4j with spring-data-neo4j, to try and see what can be done in graph databases. My model so far is:

        (Chef)-[HAS_INGREDIENT]->(Ingredient)
        (Chef)-[HAS_VALUE]->(Value)
        (Ingredient)-[HAS_INGREDIENT_VALUE]->(Value)
        (Recipe)-[REQUIRES_INGREDIENT]->(Ingredient)
        (Recipe)-[REQUIRES_VALUE]->(Value)

    I have this set up so I can do things like have the "chef" enter ingredients they have on hand and suggest recipes, as well as suggest recipes that are close matches but missing one ingredient. Some recipes can get complex, using AND, OR, and NOT type logic, something like (Milk AND (Butter OR spread OR (vegetable oil OR olive oil))), and I'm wondering if it would be sane to model this in a graph using a tree-type representation? What I was thinking is to create three node types - AND, OR, and NOT - and have each of them connect to the value nodes underneath, as in the sketch below. How else might this be represented in a graph database, or is my example above a decent representation?
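    For illustration, one hedged way such a requirement tree could be laid out in Cypher (the node labels and the REQUIRES relationship are my own assumptions, not part of the model above):

        // Hypothetical sketch of (Milk AND (Butter OR Spread))
        CREATE (root:AndNode)
        CREATE (milk:Ingredient {name: 'Milk'})
        CREATE (or1:OrNode)
        CREATE (butter:Ingredient {name: 'Butter'})
        CREATE (spread:Ingredient {name: 'Spread'})
        CREATE (root)-[:REQUIRES]->(milk)
        CREATE (root)-[:REQUIRES]->(or1)
        CREATE (or1)-[:REQUIRES]->(butter)
        CREATE (or1)-[:REQUIRES]->(spread)

    A Recipe would then point at the root node, and evaluation walks the tree: an AndNode is satisfied when all of its children are, an OrNode when any child is, and a NotNode when its single child is not.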


  • Hear about Oracle Supply Chain at Pella, Apr 27-29 '10

    - by [email protected]
    Oracle Customer Showcase, Apr 27-29 '10, featuring Pella Corp. Delivering Greater Customer Value: "Discovering the Lean Value Chain." Pella is once again hosting Oracle customers at a mega-reference event in Pella, Iowa, on April 27-29. The agenda features a cross-stack set of topics and issues, including strategies for delivering customer value, improving the customer experience, Value Chain Planning / Manufacturing / Enterprise Performance Management, and Lean practices. Several executives will keynote, including Pella CIO Steve Printz. The event includes demo grounds, round-table discussion groups, plant tours, and networking opportunities.


  • How to rebuild fstab automatically

    - by yvoyer
    I accidentally removed all the entries from the fstab file while doing a backup (yeah, I know ;)). I would like to know if there is a way to rebuild it with the current mount options, since I have not restarted the server since the deletion. If there is no such program, could anybody help me rebuild it? Using this, I have found the commands to show the current setup, but I don't know what to do with their output.

        $ sudo blkid
        /dev/sda1: UUID="3fc55e0f-a9b3-4229-9e76-ca95b4825a40" TYPE="ext4"
        /dev/sda5: UUID="718e611d-b8a3-4f02-a0cc-b3025d8db54d" TYPE="swap"
        /dev/sdb1: LABEL="Files_Server_Int" UUID="02fc2eda-d9fb-47fb-9e60-5fe3073e5b55" TYPE="ext4"
        /dev/sdc1: UUID="41e60bc2-2c9c-4104-9649-6b513919df4a" TYPE="ext4"
        /dev/sdd1: LABEL="Expansion Drive" UUID="782042B920427E5E" TYPE="ntfs"

        $ cat /etc/mtab
        /dev/sda1 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        none /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        none /dev devtmpfs rw,mode=0755 0 0
        none /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        none /dev/shm tmpfs rw,nosuid,nodev 0 0
        none /var/run tmpfs rw,nosuid,mode=0755 0 0
        none /var/lock tmpfs rw,noexec,nosuid,nodev 0 0
        none /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
        none /var/lib/ureadahead/debugfs debugfs rw,relatime 0 0
        /dev/sdc1 /home ext4 rw 0 0
        /dev/sdb1 /media/Files_Server ext4 rw 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        /dev/sdd1 /media/Expansion\040Drive fuseblk rw,nosuid,nodev,allow_other,blksize=4096,default_permissions 0 0
        gvfs-fuse-daemon /home/yvoyer/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=yvoyer 0 0
        /dev/sdd1 /media/Backup500 fuseblk rw,nosuid,nodev,sync,allow_other,blksize=4096,default_permissions 0 0
        /dev/sr0 /media/DIR-615 iso9660 ro,nosuid,nodev,uhelper=udisks,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500 0 0
        gvfs-fuse-daemon /home/cdrapeau/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=cdrapeau 0 0
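    For what it's worth, fstab lines for the real filesystems can be read almost directly off that output: device (preferably by UUID), mount point, type, options, dump, pass. A hedged sketch reconstructed from the blkid and mtab listings above (options copied from mtab; the dump/pass numbers are the usual convention, since mtab doesn't record them):

        # /etc/fstab - reconstructed sketch; verify against your system before rebooting
        UUID=3fc55e0f-a9b3-4229-9e76-ca95b4825a40  /                    ext4  errors=remount-ro  0  1
        UUID=718e611d-b8a3-4f02-a0cc-b3025d8db54d  none                 swap  sw                 0  0
        UUID=41e60bc2-2c9c-4104-9649-6b513919df4a  /home                ext4  defaults           0  2
        UUID=02fc2eda-d9fb-47fb-9e60-5fe3073e5b55  /media/Files_Server  ext4  defaults           0  2

    The NTFS drive (/dev/sdd1) shows up in mtab as fuseblk mounts, which suggests it is being handled by the desktop auto-mounter rather than fstab, so it may not need an entry at all.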


  • Silverlight 4 Twitter Client – Part 6

    - by Max
    In this post, we are going to look at implementing lists in our Twitter application, and at enhancing the data grid to display the status messages in a pleasing way with the profile images. Twitter lists are a really cool feature that they recently added. I love them, and I've quite a few lists set up - one for DOTNET gurus, one for SQL Server gurus, and one for a few celebrities. You can follow them here. Now let us move on to our tutorial.

    1) Lists can be subscribed to in two ways: one is the user's own lists, which he has created, and the other is the lists that the user is following. For example, I've created 3 lists myself and I am following 1 list created by another user. Both of them cannot be fetched in the same API call; it's a two-step process.

    2) In the TwitterCredentialsSubmit method we have in Home.xaml.cs, let us do the first API call to get the lists that the user has created. For this, the call has to be made to https://twitter.com/<TwitterUsername>/lists.xml. The API reference is available here.

        myService1.AllowReadStreamBuffering = true;
        myService1.UseDefaultCredentials = false;
        myService1.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
        myService1.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsRequestCompleted);
        myService1.DownloadStringAsync(new Uri("https://twitter.com/" + GlobalVariable.getUserName() + "/lists.xml"));

    3) Now let us look at implementing the event handler - ListsRequestCompleted - for this.

        public void ListsRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e)
        {
            if (e.Error != null)
            {
                StatusMessage.Text = "This application must be installed first.";
                parseXML("");
            }
            else
            {
                //MessageBox.Show(e.Result.ToString());
                parseXMLLists(e.Result.ToString());
            }
        }

    4) Now let us look at parseXMLLists in detail.

        xdoc = XDocument.Parse(text);
        var answer = (from status in xdoc.Descendants("list")
                      select status.Element("name").Value);
        foreach (var p in answer)
        {
            Border bord = new Border();
            bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
            Button b = new Button();
            b.MinWidth = 70;
            b.Background = new SolidColorBrush(Colors.Black);
            b.Foreground = new SolidColorBrush(Colors.Black);
            //b.Width = 70;
            b.Height = 25;
            b.Click += new RoutedEventHandler(b_Click);
            b.Content = p.ToString();
            bord.Child = b;
            TwitterListStack.Children.Add(bord);
        }

    So what am I doing here? I am dynamically creating a button for each of the lists and putting them within a StackPanel, and for each of these buttons I wire up an event handler, b_Click, which will be fired on button click. We will look into this method in detail soon. For now, let us get the buttons displayed.

    5) Now the user might have some lists to which he has subscribed. We need to create buttons for these as well. At the end of the TwitterCredentialsSubmit method, we need to make a call to http://api.twitter.com/1/<TwitterUsername>/lists/subscriptions.xml. The reference is available here. The code will look like this:

        myService2.AllowReadStreamBuffering = true;
        myService2.UseDefaultCredentials = false;
        myService2.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
        myService2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsSubsRequestCompleted);
        myService2.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/subscriptions.xml"));

    6) In the event handler - ListsSubsRequestCompleted - we need to parse through the XML string and create a button for each of the lists subscribed; let us see how. I've taken only the "full_name"; you can choose what you want - refer to the documentation here. Note that full_name has the format @<UserName>/<ListName> - this will be useful very soon.

        xdoc = XDocument.Parse(text);
        var answer = (from status in xdoc.Descendants("list")
                      select status.Element("full_name").Value);
        foreach (var p in answer)
        {
            Border bord = new Border();
            bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
            Button b = new Button();
            b.Background = new SolidColorBrush(Colors.Black);
            b.Foreground = new SolidColorBrush(Colors.Black);
            //b.Width = 70;
            b.MinWidth = 70;
            b.Height = 25;
            b.Click += new RoutedEventHandler(b_Click);
            b.Content = p.ToString();
            bord.Child = b;
            TwitterListStack.Children.Add(bord);
        }

    Please note, I am letting the button width auto-size based on the content and also giving it a minimum width value. I wanted to create rounded-corner buttons, but for some reason that's not working. Also add this StackPanel - TwitterListStack - to Home.xaml:

        <StackPanel HorizontalAlignment="Center" Orientation="Horizontal" Name="TwitterListStack"></StackPanel>

    After doing this, you should get a series of buttons at the top of the home page.

    7) Now the button click event handler - b_Click. In this method, once the button is clicked, I call another method with the content string of the clicked button as the parameter.

        Button b = (Button)e.OriginalSource;
        getListStatuses(b.Content.ToString());

    8) Now the getListStatuses method:

        toggleProgressBar(true);
        WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp);
        WebClient myService = new WebClient();
        myService.AllowReadStreamBuffering = true;
        myService.UseDefaultCredentials = false;
        myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);
        if (listName.IndexOf("@") > -1 && listName.IndexOf("/") > -1)
        {
            string[] arrays = null;
            arrays = listName.Split('/');
            arrays[0] = arrays[0].Replace("@", " ").Trim();
            //MessageBox.Show(arrays[0]);
            //MessageBox.Show(arrays[1]);
            string url = "http://api.twitter.com/1/" + arrays[0] + "/lists/" + arrays[1] + "/statuses.xml";
            //MessageBox.Show(url);
            myService.DownloadStringAsync(new Uri(url));
        }
        else
            myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/" + listName + "/statuses.xml"));

    Please note that the URL to use differs based on the list clicked. If it's user-created, the URL format will be http://api.twitter.com/1/<CurrentUser>/lists/<ListName>/statuses.xml, but if it is a subscribed list, it will be http://api.twitter.com/1/<ListOwnerUserName>/lists/<ListName>/statuses.xml. The first one is pretty straightforward to implement, but if it's a subscribed list, we need to split the listName string to get the list owner and the list name and use them to form the URL. So that is what I've done in this method: if the listName contains "@" and "/", I build the URL differently.

    9) Until now, we've been using only a few nodes of the status message XML string; now we will fetch a new field - "profile_image_url". Images in the datagrid - COOL. For that, we need to modify our Status.cs file to include two more properties, one string and one BitmapImage, with get and set.

        public string profile_image_url { get; set; }
        public BitmapImage profileImage { get; set; }

    10) Now let us change the generic parseXML method which is used for binding to the datagrid.

        public void parseXML(string text)
        {
            XDocument xdoc;
            xdoc = XDocument.Parse(text);
            statusList = new List<Status>();
            statusList = (from status in xdoc.Descendants("status")
                          select new Status
                          {
                              ID = status.Element("id").Value,
                              Text = status.Element("text").Value,
                              Source = status.Element("source").Value,
                              UserID = status.Element("user").Element("id").Value,
                              UserName = status.Element("user").Element("screen_name").Value,
                              profile_image_url = status.Element("user").Element("profile_image_url").Value,
                              profileImage = new BitmapImage(new Uri(status.Element("user").Element("profile_image_url").Value))
                          }).ToList();
            DataGridStatus.ItemsSource = statusList;
            StatusMessage.Text = "Datagrid refreshed.";
            toggleProgressBar(false);
        }

    Here we are creating a new bitmap image from the image URL and creating a new Status object for every status, then binding them all to the data grid. Refer to the Twitter API documentation here; you can choose any column you want.

    11) Until now, we've been using auto-generated columns for the data grid, but if you want it to be really cool, you need to define the columns with templates, etc.

        <data:DataGrid AutoGenerateColumns="False" Name="DataGridStatus" Height="Auto" MinWidth="400">
            <data:DataGrid.Columns>
                <data:DataGridTemplateColumn Width="50" Header="">
                    <data:DataGridTemplateColumn.CellTemplate>
                        <DataTemplate>
                            <Image Source="{Binding profileImage}" Width="50" Height="50" Margin="1"/>
                        </DataTemplate>
                    </data:DataGridTemplateColumn.CellTemplate>
                </data:DataGridTemplateColumn>
                <data:DataGridTextColumn Width="Auto" Header="User Name" Binding="{Binding UserName}" />
                <data:DataGridTemplateColumn MinWidth="300" Width="Auto" Header="Status">
                    <data:DataGridTemplateColumn.CellTemplate>
                        <DataTemplate>
                            <TextBlock TextWrapping="Wrap" Text="{Binding Text}"/>
                        </DataTemplate>
                    </data:DataGridTemplateColumn.CellTemplate>
                </data:DataGridTemplateColumn>
            </data:DataGrid.Columns>
        </data:DataGrid>

    I've used only three columns - profile image, username, and status text. Now our datagrid will look super cool. Coincidentally, Tim Heuer is in the screenshot; he is a Silverlight guru and works on the SL team at Microsoft. His blog is really super. Here is the zipped file with all the xaml, xaml.cs, and class files. OK, let us stop here for now; we will look into implementing a few more features in the next few posts, and then I am going to look into developing an ASP.NET MVC 2 application. Hope you all liked this post. If you have any queries / suggestions, feel free to comment below or contact me. Cheers!

    Technorati Tags: Silverlight,LINQ,Twitter API,Twitter,Silverlight 4


  • To display the field values submitted with AJAX [closed]

    - by work
    Here is the code. I want to post the field values entered in this form to the page ajaxpost.php using Ajax and then do some operations there. What would be the code required in ajaxpost.php?

        <html>
        <head>
        <script type="text/javascript">
        function loadXMLDoc()
        {
            var xmlhttp;
            if (window.XMLHttpRequest)
            { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            }
            else
            { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function()
            {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200)
                {
                    document.getElementById("myDiv").innerHTML = xmlhttp.responseText;
                }
            }
            var zz = document.f1.dd.value;
            //alert(zz);
            var qq = document.f1.cc.value;
            xmlhttp.open("POST", "ajaxpost.php", true);
            xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            // Note: the original send("dd=zz&cc=qq") posted the literal strings "zz" and "qq";
            // the variables have to be concatenated (and URL-encoded) into the body instead.
            xmlhttp.send("dd=" + encodeURIComponent(zz) + "&cc=" + encodeURIComponent(qq));
        }
        </script>
        </head>
        <body>
        <h2>AJAX</h2>
        <form name="f1">
            <input type="text" name="dd">
            <input type="text" name="cc">
            <button type="button" onclick="loadXMLDoc()">Request data</button>
            <div id="myDiv"></div>
        </form>
        </body>
        </html>
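    As for the server side, a minimal hedged sketch of ajaxpost.php could be as simple as the following - it just reads the two posted fields and echoes a response, which then lands in myDiv (the operations you actually want to perform would replace the echo):

        <?php
        // Minimal sketch: read the posted fields and send something back.
        $dd = isset($_POST['dd']) ? $_POST['dd'] : '';
        $cc = isset($_POST['cc']) ? $_POST['cc'] : '';
        echo 'You submitted: ' . htmlspecialchars($dd) . ' and ' . htmlspecialchars($cc);
        ?>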


  • Access Officejet Pro L7590 memory card reader

    - by luri
    I can't manage to access my printer's memory card reader in Nautilus; I can only access it with hp-unload. Here's a sample session with that command:

        lubuntu@L-X6:~$ hp-unload hp:/net/Officejet_Pro_L7500?zc=HP065193
        HP Linux Imaging and Printing System (ver. 3.10.6)
        Photo Card Access Utility ver. 3.3
        Copyright (c) 2001-9 Hewlett-Packard Development Company, LP
        This software comes with ABSOLUTELY NO WARRANTY.
        This is free software, and you are welcome to distribute it
        under certain conditions. See COPYING file for more details.
        Using device: hp:/net/Officejet_Pro_L7500?zc=HP065193
        error: Photo card write failed (Card may be write protected)
        Photocard on device hp:/net/Officejet_Pro_L7500?zc=HP065193 mounted
        DO NOT REMOVE PHOTO CARD UNTIL YOU EXIT THIS PROGRAM
        warning: Photo card is write protected.
        Type 'help' for a list of commands. Type 'exit' to quit.
        pcard: / > ls
        Name          Size    Type
        dcim/                 directory
        eos_digi.tal  0 B     unknown/unknown
        1 files, 0 B
        pcard: / > cd dcim
        pcard: /dcim > ls
        Name          Size    Type
        .                     directory
        ..                    directory
        100eos5d/             directory
        267canon/             directory
        270canon/             directory
        271canon/             directory
        272canon/             directory
        0 files, 0 B
        pcard: /dcim > cd 272canon
        pcard: /dcim/272canon > ls
        Name          Size    Type
        .                     directory
        ..                    directory
        _mg_7201.jpg  3.1 MB  image/jpeg
        ...........(some more files).................
        _mg_7281.jpg  2.5 MB  image/jpeg
        _mg_7282.jpg  2.5 MB  image/jpeg
        82 files, 241.6 MB (253377883)

    How can I access it from Nautilus or mount it as a filesystem? Note that this is similar to this other question: Can't get HP Officejet 6500 card reader to work - but there actually seemed to be no supported device in that case, while in my case I can access the memory card through hp-unload.


  • What is the supposed productivity gain of dynamic typing?

    - by hstoerr
    I often hear the claim that dynamically typed languages are more productive than statically typed languages. What are the reasons for this claim? Isn't it just tooling with modern concepts like convention over configuration, the use of functional programming, advanced programming models, and the use of consistent abstractions? Admittedly there is less clutter, because the (for instance in Java) often redundant type declarations are not needed, but you can also omit most type declarations in statically typed languages that use type inference, without losing the other advantages of static typing - see the snippet below. And all of this is available for modern statically typed languages like Scala as well. So: what is there to say for productivity with dynamic typing that really is an advantage of the type model itself?
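    To make that point concrete, a short hedged Scala snippet - every value below is statically checked, yet no type annotation is written out, because the compiler infers them:

        // Statically typed, but the declarations carry no explicit types.
        val answer = 42                          // inferred as Int
        val names = List("Ada", "Grace")         // inferred as List[String]
        val lengths = names.map(n => n.length)   // inferred as List[Int]
        // A typo like names.map(n => n.lenght) is still rejected at compile time.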


  • Default Parameters vs Method Overloading

    - by João Angelo
    With default parameters introduced in C# 4.0, one might be tempted to abandon the old approach of providing method overloads to simulate default parameters. However, you must take into consideration that the two techniques are not interchangeable, since they show different behaviors in certain scenarios. For me the most relevant difference is that default parameters are a compile-time feature while method overloading is a runtime feature. To illustrate these concepts, let's take a look at a complete, although a bit long, example. What you need to retain from the example is that the static method Foo uses method overloading while the static method Bar uses C# 4.0 default parameters.

        static void CreateCallerAssembly(string name)
        {
            // Caller class - invokes Example.Foo() and Example.Bar()
            string callerCode = String.Concat(
                "using System;",
                "public class Caller",
                "{",
                "    public void Print()",
                "    {",
                "        Console.WriteLine(Example.Foo());",
                "        Console.WriteLine(Example.Bar());",
                "    }",
                "}");
            var parameters = new CompilerParameters(new[] { "system.dll", "Common.dll" }, name);
            new CSharpCodeProvider().CompileAssemblyFromSource(parameters, callerCode);
        }

        static void Main()
        {
            // Example class - Foo uses overloading while Bar uses C# 4.0 default parameters
            string exampleCode = String.Concat(
                "using System;",
                "public class Example",
                "{{",
                "    public static string Foo() {{ return Foo(\"{0}\"); }}",
                "    public static string Foo(string key) {{ return \"FOO-\" + key; }}",
                "    public static string Bar(string key = \"{0}\") {{ return \"BAR-\" + key; }}",
                "}}");
            var compiler = new CSharpCodeProvider();
            var parameters = new CompilerParameters(new[] { "system.dll" }, "Common.dll");

            // Build Common.dll with default value of "V1"
            compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V1"));

            // Caller1 built against Common.dll that uses a default of "V1"
            CreateCallerAssembly("Caller1.dll");

            // Rebuild Common.dll with default value of "V2"
            compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V2"));

            // Caller2 built against Common.dll that uses a default of "V2"
            CreateCallerAssembly("Caller2.dll");

            dynamic caller1 = Assembly.LoadFrom("Caller1.dll").CreateInstance("Caller");
            dynamic caller2 = Assembly.LoadFrom("Caller2.dll").CreateInstance("Caller");

            Console.WriteLine("Caller1.dll:");
            caller1.Print();
            Console.WriteLine("Caller2.dll:");
            caller2.Print();
        }

    And if you run this code you will get the following output:

        // Caller1.dll:
        // FOO-V2
        // BAR-V1
        // Caller2.dll:
        // FOO-V2
        // BAR-V2

    You see that even though Caller1.dll runs against the current Common.dll assembly - where method Bar defines a default value of "V2" - the output shows the default value defined at the time Caller1.dll was compiled against the first version of Common.dll. This happens because the compiler copies the current default value into each call site, much in the same way a constant value (const keyword) is copied into a calling assembly, and changes to it will only be reflected if you rebuild the calling assembly. The use of default parameters in public APIs is also discouraged by Microsoft, as stated in code analysis rule CA1026 (Default parameters should not be used).


  • Syntax logic suggestions

    - by Anna
    This syntax will be used inside HTML attributes. Here are a few examples of what I have so far:

        <input name="a" conditions="!b, c" />
        <input name="b" />
        <input name="c" />

    This will make input "a" do something if b is not checked and c is checked (b and c are assumed to be checkboxes if they don't have a :value defined).

        <input name="a" conditions="!b:foo|bar, c:foo" />
        <input name="b" />
        <input name="c" />

    This will make input "a" do something if b doesn't have the foo or bar values, and if c has the foo value.

        <input name="a" conditions="!b:EMPTY" />
        <input name="b" />

    This makes input "a" do something if b has a value assigned. So, essentially, "," acts as logical AND, ":" as equals (=), "!" as NOT, and "|" as OR. The "|" (OR) is only needed between values (at least I think so), and AND is not needed between values for obvious reasons :) "EMPTY" means an empty value, like <input value="" />. A small evaluation sketch follows below. Do you have any suggestions on improving this syntax, like making it more human-friendly? For example, I think the "EMPTY" keyword is not really appropriate and should be replaced with a character, but I don't know which one to choose.
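    To pin down the intended semantics, here is a rough hedged JavaScript sketch of how one such conditions string could be evaluated (the function names and the simplified parsing are invented for illustration; EMPTY handling is omitted):

        // Hypothetical sketch: evaluate one term like "!b:foo|bar" against a form.
        function evalTerm(term, form) {
            var negated = term.charAt(0) === '!';       // "!" is NOT
            if (negated) term = term.slice(1);
            var parts = term.split(':');                // name, optional value list
            var field = form[parts[0]];
            var result;
            if (parts.length === 1) {
                result = field.checked;                 // bare name: checkbox state
            } else {
                var values = parts[1].split('|');       // "|" is OR between values
                result = values.indexOf(field.value) !== -1;
            }
            return negated ? !result : result;
        }

        // "," is AND: every term in the conditions attribute must hold.
        function evalConditions(conditions, form) {
            return conditions.split(',').every(function (t) {
                return evalTerm(t.trim(), form);
            });
        }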


  • Send raw data to USB parallel port after upgrading to 11.10

    - by zaphod
    I have a laser cutter connected via a generic USB-to-parallel adapter. The laser cutter speaks HPGL, as it happens, but since this is a laser cutter and not a plotter, I usually want to generate the HPGL myself, since I care about the ordering, speed, and direction of cuts and so on. In previous versions of Ubuntu, I was able to print to the cutter by copying an HPGL file directly to the corresponding USB "lp" device. For example:

        cp foo.plt /dev/usblp1

    Well, I just upgraded to Ubuntu 11.10 oneiric, and I can't find any "lp" devices in /dev anymore. D'oh! What's the preferred way to send raw data to a parallel port in Ubuntu? I've tried System Settings > Printing > Add, hoping that I might be able to associate my device with some kind of "raw printer" driver and print to it with a command like

        lp -d LaserCutter foo.plt

    But my USB-to-parallel adapter doesn't seem to show up in the list. What I do see are my HP Color LaserJet, two USB-to-serial adapters, "Enter URI", and "Network Printer". Meanwhile, over in /dev, I do see /dev/ttyUSB0 and /dev/ttyUSB1 devices for the 2 USB-to-serial adapters. I don't see anything obvious corresponding to the HP printer (which was /dev/usblp0 prior to the upgrade), except for generic USB stuff. For example,

        sudo find /dev | grep lp

    produces no output. I do seem to be able to print to the HP printer just fine, though. The printer setup GUI gives it a device URI starting with "hp:", which isn't much help for the parallel adapter. The CUPS administrator's guide makes it sound like I might need to feed it a device URI of the form parallel:/dev/SOMETHING, but of course if I had a /dev/SOMETHING I'd probably just go on writing to it directly. Here's what dmesg says after I disconnect and reconnect the device from the USB port:

        [  924.722906] usb 1-1.1.4: USB disconnect, device number 7
        [  959.993002] usb 1-1.1.4: new full speed USB device number 8 using ehci_hcd

    And here's how it shows up in lsusb -v:

        Bus 001 Device 008: ID 1a86:7584 QinHeng Electronics CH340S
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               1.10
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0         8
          idVendor           0x1a86 QinHeng Electronics
          idProduct          0x7584 CH340S
          bcdDevice            2.52
          iManufacturer           0
          iProduct                2 USB2.0-Print
          iSerial                 0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           32
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0x80 (Bus Powered)
            MaxPower             96mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           2
              bInterfaceClass         7 Printer
              bInterfaceSubClass      1 Printer
              bInterfaceProtocol      2 Bidirectional
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x82 EP 2 IN
                bmAttributes            2
                  Transfer Type  Bulk
                  Synch Type     None
                  Usage Type     Data
                wMaxPacketSize 0x0020 1x 32 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x02 EP 2 OUT
                bmAttributes            2
                  Transfer Type  Bulk
                  Synch Type     None
                  Usage Type     Data
                wMaxPacketSize 0x0020 1x 32 bytes
                bInterval               0
        Device Status: 0x0000 (Bus Powered)


  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

    - Streams with 'open' inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
    - Streams with 'partially bound' inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, ...))
    - Streams with fully bound inputs (e.g., those defined and composed over To*Stream - sequences or DQC)

    The stream may be embedded (where Server.Create is used) or remote (where Server.Connect is used). When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

    - IStreamable<>, which logically represents a temporal stream.
    - IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany: StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL's "CROSS APPLY" operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn't supported, its type in StreamInsight 1.0 - 2.0 wasn't carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight's capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed - that is, queries that do not reference the stream selector parameter - there is no difference between the code written for the two overloads:

        from x in xs
        from y in ys
        select f(x, y)

    Top-K: The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream.

    Before:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    After:

        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast: We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways.

    Implicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;

    Explicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies: Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

    Before:

        xs.SnapshotWindow(
            WindowInputPolicy.ClipToWindow,
            SnapshotWindowInputPolicy.Clip)

    After:

        xs.SnapshotWindow()

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.PointAlignToWindowEnd)

    After:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1))

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.ClipToWindowEnd)

    After: not supported.

    LeftAntiJoin: Representation of LASJ as a correlated sub-query in the LINQ surface is problematic, as the StreamInsight engine does not support correlated sub-queries (see the discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported IsEmpty() operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead.

    Before:

        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

    Before:

        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion: The ApplyWithUnion methods have been deprecated, since their signatures are redundant given the standard SelectMany overloads.

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    After:

        xs.GroupBy(x => x.A).SelectMany(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count(),
            r => new { r.Key, Count = r.Payload })

    After:

        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax: The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow()
        select MyUdo(win)

    The "many-or-one" confusion is exemplified by the following example, which compiles but fails at runtime:

        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

    Original syntax:

        from win in xs.SnapshotWindow()
        select win.UdoProxy(1)

    New alternate syntax:

        from win in xs.SnapshotWindow()
        from y in win.UserDefinedOperator(() => new Udo(1))
        select y

    -or-

        from win in xs.SnapshotWindow()
        from y in win.UdoMacro(1)
        select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing "proxy method" approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types of the corresponding proxy method.

    UDSO syntax: UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

    Before:

        xs.Scan(new Udso())

    After:

        xs.Scan(() => new Udso())

    Name changes: ShiftEventTime => AlterEventStartTime (the alter-event-lifetime overload taking a new start time value has been renamed), and CountByStartTimeWindow => CountWindow.


  • C#/.NET Little Wonders: Comparer<T>.Default

    - by James Michael Hare
    I’ve been working with a wonderful team on a major release where I work, which has had the side-effect of occupying most of my spare time preparing, testing, and monitoring.  However, I do have this Little Wonder tidbit to offer today. Introduction The IComparable<T> interface is great for implementing a natural order for a data type.  It’s a very simple interface with a single method: 1: public interface IComparer<in T> 2: { 3: // Compare two instances of same type. 4: int Compare(T x, T y); 5: }  So what do we expect for the integer return value?  It’s a pseudo-relative measure of the ordering of x and y, which returns an integer value in much the same way C++ returns an integer result from the strcmp() c-style string comparison function: If x == y, returns 0. If x > y, returns > 0 (often +1, but not guaranteed) If x < y, returns < 0 (often –1, but not guaranteed) Notice that the comparison operator used to evaluate against zero should be the same comparison operator you’d use as the comparison operator between x and y.  That is, if you want to see if x > y you’d see if the result > 0. The Problem: Comparing With null Can Be Messy This gets tricky though when you have null arguments.  According to the MSDN, a null value should be considered equal to a null value, and a null value should be less than a non-null value.  So taking this into account we’d expect this instead: If x == y (or both null), return 0. If x > y (or y only is null), return > 0. If x < y (or x only is null), return < 0. But here’s the problem – if x is null, what happens when we attempt to call CompareTo() off of x? 1: // what happens if x is null? 2: x.CompareTo(y); It’s pretty obvious we’ll get a NullReferenceException here.  Now, we could guard against this before calling CompareTo(): 1: int result; 2:  3: // first check to see if lhs is null. 4: if (x == null) 5: { 6: // if lhs null, check rhs to decide on return value. 7: if (y == null) 8: { 9: result = 0; 10: } 11: else 12: { 13: result = -1; 14: } 15: } 16: else 17: { 18: // CompareTo() should handle a null y correctly and return > 0 if so. 19: result = x.CompareTo(y); 20: } Of course, we could shorten this with the ternary operator (?:), but even then it’s ugly repetitive code: 1: int result = (x == null) 2: ? ((y == null) ? 0 : -1) 3: : x.CompareTo(y); Fortunately, the null issues can be cleaned up by drafting in an external Comparer.  The Soltuion: Comparer<T>.Default You can always develop your own instance of IComparer<T> for the job of comparing two items of the same type.  The nice thing about a IComparer is its is independent of the things you are comparing, so this makes it great for comparing in an alternative order to the natural order of items, or when one or both of the items may be null. 1: public class NullableIntComparer : IComparer<int?> 2: { 3: public int Compare(int? x, int? y) 4: { 5: return (x == null) 6: ? ((y == null) ? 0 : -1) 7: : x.Value.CompareTo(y); 8: } 9: }  Now, if you want a custom sort -- especially on large-grained objects with different possible sort fields -- this is the best option you have.  But if you just want to take advantage of the natural ordering of the type, there is an easier way.  
If the type you want to compare already implements IComparable<T> or if the type is System.Nullable<T> where T implements IComparable, there is a class in the System.Collections.Generic namespace called Comparer<T> which exposes a property called Default that will create a singleton that represents the default comparer for items of that type.  For example: 1: // compares integers 2: var intComparer = Comparer<int>.Default; 3:  4: // compares DateTime values 5: var dateTimeComparer = Comparer<DateTime>.Default; 6:  7: // compares nullable doubles using the null rules! 8: var nullableDoubleComparer = Comparer<double?>.Default;  This helps you avoid having to remember the messy null logic and makes it to compare objects where you don’t know if one or more of the values is null. This works especially well when creating say an IComparer<T> implementation for a large-grained class that may or may not contain a field.  For example, let’s say you want to create a sorting comparer for a stock open price, but if the market the stock is trading in hasn’t opened yet, the open price will be null.  We could handle this (assuming a reasonable Quote definition) like: 1: public class Quote 2: { 3: // the opening price of the symbol quoted 4: public double? Open { get; set; } 5:  6: // ticker symbol 7: public string Symbol { get; set; } 8:  9: // etc. 10: } 11:  12: public class OpenPriceQuoteComparer : IComparer<Quote> 13: { 14: // Compares two quotes by opening price 15: public int Compare(Quote x, Quote y) 16: { 17: return Comparer<double?>.Default.Compare(x.Open, y.Open); 18: } 19: } Summary Defining a custom comparer is often needed for non-natural ordering or defining alternative orderings, but when you just want to compare two items that are IComparable<T> and account for null behavior, you can use the Comparer<T>.Default comparer generator and you’ll never have to worry about correct null value sorting again.     Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,IComparable,Comparer


  • Are nullable types preferable to magic numbers?

    - by Matt H
    I have been having a little bit of a debate with a coworker lately. We are specifically using C#, but this could apply to any language with nullable types. Say for example you have a value that represents a maximum. However, this maximum value is optional. I argue that a nullable number would be preferable. My coworker favors the use of zero, citing precedent. Granted, things like network sockets have often used zero to represent an unlimited timeout. If I were to write code dealing with sockets today, I would personally use a nullable value, since I feel it would better represent the fact that there is NO timeout. Which representation is better? Both require a condition checking for the value meaning "none", but I believe that a nullable type conveys the intent a little bit better.
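    For illustration, here is a small hedged sketch of the two styles being debated (the methods and their names are invented for this example, not from the discussion above):

        using System;

        class TimeoutDemo
        {
            // Magic-number style: zero is overloaded to mean "no limit".
            static string DescribeMagic(int timeoutSeconds)
            {
                // The reader must know the convention that 0 means "wait forever".
                return timeoutSeconds == 0 ? "no timeout" : "timeout of " + timeoutSeconds + "s";
            }

            // Nullable style: the absence of a limit is stated in the type itself.
            static string DescribeNullable(int? timeoutSeconds)
            {
                // null reads directly as "no timeout was specified".
                return timeoutSeconds == null ? "no timeout" : "timeout of " + timeoutSeconds.Value + "s";
            }

            static void Main()
            {
                Console.WriteLine(DescribeMagic(0));        // relies on the 0 convention
                Console.WriteLine(DescribeNullable(null));  // intent is visible in the type
                Console.WriteLine(DescribeNullable(30));
            }
        }

    Both versions still need the "none" check that the question mentions; the difference is that int? makes the "no maximum" case impossible to overlook in the signature, while 0 relies on every caller knowing the convention.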


  • fd partitions gone from 2 discs, md happy with it and resyncs. How to recover?

    - by d0nd
    Hey gurus, I need some help badly with this one. I run a server with a 6 TB md RAID 5 volume built over 7 x 1 TB disks. I had to shut down the server lately, and when it went back up, 2 out of the 7 disks used for the RAID volume had lost their partition configuration. dmesg:

        [   10.184167]  sda: sda1 sda2 sda3 // System disk
        [   10.202072]  sdb: sdb1
        [   10.210073]  sdc: sdc1
        [   10.222073]  sdd: sdd1
        [   10.229330]  sde: sde1
        [   10.239449]  sdf: sdf1
        [   11.099896]  sdg: unknown partition table
        [   11.255641]  sdh: unknown partition table

    All 7 disks have the same geometry and were configured alike:

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x1e7481a5

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1      121601   976760001   fd  Linux raid autodetect

    All 7 partitions (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in an md RAID 5 XFS volume. When booting, md, which was (obviously) out of sync, kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; XFS tried to do some shenanigans as well. dmesg:

        [   19.566941] md: md0 stopped.
        [   19.817038] md: bind<sdc1>
        [   19.817339] md: bind<sdd1>
        [   19.817465] md: bind<sde1>
        [   19.817739] md: bind<sdf1>
        [   19.817917] md: bind<sdh>
        [   19.818079] md: bind<sdg>
        [   19.818198] md: bind<sdb1>
        [   19.818248] md: md0: raid array is not clean -- starting background reconstruction
        [   19.825259] raid5: device sdb1 operational as raid disk 0
        [   19.825261] raid5: device sdg operational as raid disk 6
        [   19.825262] raid5: device sdh operational as raid disk 5
        [   19.825264] raid5: device sdf1 operational as raid disk 4
        [   19.825265] raid5: device sde1 operational as raid disk 3
        [   19.825267] raid5: device sdd1 operational as raid disk 2
        [   19.825268] raid5: device sdc1 operational as raid disk 1
        [   19.825665] raid5: allocated 7334kB for md0
        [   19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2
        [   19.825669] RAID5 conf printout:
        [   19.825670]  --- rd:7 wd:7
        [   19.825671]  disk 0, o:1, dev:sdb1
        [   19.825672]  disk 1, o:1, dev:sdc1
        [   19.825673]  disk 2, o:1, dev:sdd1
        [   19.825675]  disk 3, o:1, dev:sde1
        [   19.825676]  disk 4, o:1, dev:sdf1
        [   19.825677]  disk 5, o:1, dev:sdh
        [   19.825679]  disk 6, o:1, dev:sdg
        [   19.899787] PM: Starting manual resume from disk
        [   28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device
        [   28.663228] XFS mounting filesystem md0
        [   28.884433] md: resync of RAID array md0
        [   28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
        [   28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
        [   28.884433] md: using 128k window, over a total of 976759936 blocks.
        [   29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal)
        [   32.680486] XFS: xlog_recover_process_data: bad clientid
        [   32.680495] XFS: log mount/recovery failed: error 5
        [   32.682773] XFS: log mount failed

    I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array, but it didn't work: no matter what is in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 and sdh1, which explains why it won't use them. I just don't know why those partitions are gone from /dev and how to re-add them.

        blkid:
        /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2"
        /dev/sda2: TYPE="swap"
        /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3"
        /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"

        fdisk -l (system disk):
        Disk /dev/sda: 40.0 GB, 40020664320 bytes
        255 heads, 63 sectors/track, 4865 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x8c878c87

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          12       96358+  83  Linux
        /dev/sda2              13         134      979965   82  Linux swap / Solaris
        /dev/sda3             135        4865    38001757+  83  Linux

        fdisk -l (the seven 1 TB data disks, /dev/sdb through /dev/sdh, report identical layouts;
        disk identifiers: sdb 0x1e7481a5, sdc 0xc9bdc1e9, sdd 0xcc356c30, sde 0xe87f7a3d,
        sdf 0xb17a2d22, sdg 0x8f3bce61, sdh 0xa98062ce):
        Disk /dev/sdX: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdX1               1      121601   976760001   fd  Linux raid autodetect

    I really don't know what happened nor how to recover from this mess. Needless to say, the 5 TB or so of data sitting on those disks is very valuable to me... Any ideas, anyone? Has anybody experienced a similar situation or know how to recover from it? Can someone help me? I'm really desperate... :x


  • Setting up a Carousel Component in ADF Mobile

    - by Shay Shmeltzer
    The carousel component is one of the slicker ways of showing collections of data, and on a mobile device it works really great with the finger-swipe gesture. Using the carousel component in ADF Mobile is similar to using it in regular web ADF applications, with one major change - right now you can't drag a collection from the data control palette and drop it as a carousel. So here is a quick workaround for that, along with details about setting up carousels in your application.

    The first thing you'll need is a data control that returns an array of records. In my demo I'm using the Emps collection that you can get from following this tutorial. Then you drag the emps collection and drop it on your amx page as an ADF Mobile iterator. We are doing this as a shortcut to getting the right binding needed for a carousel in our page. If you look now in your page's bindings, you'll see the new iterator binding (the original post shows a screenshot of it here). You can now comment out the whole iterator code in your page's source.

    Next, let's add the carousel. From the component palette, drag the carousel (from the data view category) to the page. Next, drag a carousel item and drop it in the nodeStamp facet of the carousel. Now we'll hook up the carousel to the binding we got from the iterator - this is quite simple: just copy the var and value attributes from the iterator tag to the carousel tag:

        var="row" value="#{bindings.emps.collectionModel}"

    Next, drop a panelForm, or another layout panel, into the carousel item. Into that panelForm you can now drop items and bind their value property to the row attribute names - basically copying the way the fields are set up in the iterator, for example value="#{row.hireDate}". By the way, you can also copy other attributes, like the label. And that's it. Your code should end up looking something like this:

        <amx:carousel id="c1" var="row" value="#{bindings.emps.collectionModel}">
          <amx:facet name="nodeStamp">
            <amx:carouselItem id="ci1">
              <amx:panelFormLayout id="pfl1">
                <amx:inputText label="#{bindings.emps.hints.salary.label}" value="#{row.salary}" id="it1"/>
                <amx:inputText label="#{bindings.emps.hints.name.label}" value="#{row.name}" id="it2"/>
              </amx:panelFormLayout>
            </amx:carouselItem>
          </amx:facet>
        </amx:carousel>

    And when you run your application it will look like this (the original post closes with a screenshot of the running carousel).


  • How do I reset my Ubuntu 12.10 password?

    - by Salvador Yniguez
    So my sister gave me this old laptop that has Ubuntu 12.10. The problem is that there is an administrator password on her username, and she forgot it. I've tried using GRUB, launching recovery mode, and using the root shell prompt. I type the "passwd username" command, and it tells me to type the new UNIX password, but when I try to type a new password it's like my keyboard doesn't even work: it types nothing. What's the problem? Why does my keyboard not type anything when I try to reset the UNIX password? It always works perfectly fine otherwise. I'm grateful for any help, thank you.


  • Redirect input from one terminal to another

    - by Niki Yoshiuchi
    I have sshed into a Linux box and I'm using dvtm and bash (although I have also tried this with GNU screen and bash). I have two terminals, currently /dev/pts/29 and /dev/pts/130. I want to redirect the input from one to the other. From what I understand, in /dev/pts/130 I can type:

        cat </dev/pts/29

    And then when I type in /dev/pts/29, the characters I type should show up in /dev/pts/130. However, what ends up happening is that every other character I type gets redirected. For example, if I type "hello" I get this:

        /dev/pts/29    |    /dev/pts/130
        $              |    $ cat </dev/pts/29
        $ el           |    hlo

    This is really frustrating, as I need to do this in order to redirect the I/O of a process running in gdb (I've tried both "run /dev/pts/#" and "set inferior-tty /dev/pts/#", and both resulted in the aforementioned behavior). Am I doing something wrong, or is this a bug in bash/screen/dvtm?


  • Supply Chain Professionals: Get Connected, Stay Current

    - by Stephen Slade
    Each day, thousands of supply chain professionals like you face challenges: lower inventory, collaborate better with distant partners, and stretch more value from depleting resources. With ever-changing customer demands, products getting to market faster, and lifecycles shortening, the challenges grow even faster. How do we respond? It's amazing how much material is available online for our supply chain community. Many want to stay informed and connected with better information. One great way to stay current on rapidly changing markets and solutions is to subscribe to the Value Chain Transformation newsletter, published quarterly by the content staff at Oracle. In this edition there are a few great articles on Cloud, OpenWorld, events, and products, with solid customer testimony to share with you, our supply chain community. Below is the link to the newsletter; subscription information is located at the bottom of the newsletter.

    Sept '12 Value Chain Newsletter: http://www.oracle.com/us/corporate/newsletter/archive/value-chain-and-procurement-1559127.html


  • XenApp 6.5 – How to create and set a Policy using PowerShell

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/06/20/xenapp-6.5--how-to-create-and-set-a-policy.aspx

    Here is my homework:

        Add-PSSnapin -name Citrix.Common.* -ErrorAction SilentlyContinue
        New-Item LocalFarmGpo:\User\MyPolicy
        cd LocalFarmGpo:\User\MyPolicy\Settings\ICA\Security
        Set-ItemProperty .\MinimumEncryptionLevel State Enabled
        Set-ItemProperty .\MinimumEncryptionLevel Value Bits128
        cd LocalFarmGpo:\User\MyPolicy\Filters\WorkerGroup
        New-Item -Name "All Servers" -Value "All Servers"
        Set-ItemProperty LocalFarmGpo:\User\MyPolicy -Name Priority -Value 2

    So cute ...


  • What does it mean to treat data as an asset?

    What does it mean to treat data as an asset? When considering this concept, we must define what data is and how it can be considered an asset. Data can be defined as a collection of stored truths that are open to interpretation and manipulation. Expanding on this definition, data can be viewed as a set of captured facts, measurements, and ideas used to make decisions. Furthermore, InvestorsWords.com defines an asset as any item of economic value owned by an individual or corporation.

    Now let's apply this definition of asset to our definition of data, and ask the following question: can facts, measurements, and ideas be items of economic value owned by an individual or corporation? The obvious answer is yes; data can be bought and sold like a commodity, or analyzed to make smarter business decisions.

    We can look at the economic value of data in one of two ways. First, data can be sold as a commodity in the form of goods like eBooks, training, music, movies, and so on. Customers are willing to pay to gain access to this data for their consumption. This directly implies that there is economic value in data as a commodity, because customers see value in obtaining it. Second, data can be used to make smarter business decisions that allow companies to become more profitable and/or reduce their potential risk in regard to how they operate. In the past I have worked at companies where we had to analyze previous sales activity in conjunction with current activity to determine how the company was performing for the quarter. In addition, trends can be derived from existing data, allowing companies to forecast so that they can make strategic business decisions based on sound forecasted data.

    Companies that truly value their data are constantly trying to grow and upgrade their data and supporting applications, because it is the lifeblood of a company. If we look at an eBook retailer, for example, imagine if they lost all of their data. They would in essence be forced out of business, because they would have nothing to sell. In turn, if we look at a company that was using data to facilitate better decision-making and they lost all of their data, they could be losing potential revenue and/or increasing losses by making important business decisions virtually in the dark, compared to when those decisions were made on solid data.

    Read the article

  • Agile Executives

    - by Robert May
    Over the years, I have experienced many different styles of software development. In the early days, most of the development was Waterfall development. In the last few years, I've become an advocate of Scrum. As I talked about last month, many people have misconceptions about what Scrum really is. The reason why we do Scrum at Veracity is because of the difference it makes in the life of the team doing Scrum. Software is for people, and happy, motivated people will build better software. However, not all executives understand Scrum and how to get the information from development teams that use Scrum. I think that these executives need a support system for managing Agile teams.

    Historical Software Management

    When Henry Ford pioneered the assembly line, I doubt he realized the impact he'd have on management through the ages. Historically, management was about managing the process of building things. The people were just cogs in that process. Like all cogs, they were replaceable. Unfortunately, most of the software industry followed this same style of management. Many of today's senior managers learned how to manage companies before software was a significant influence on how the company did business. Software development is a very creative process, but too many managers have treated it like an assembly line. Ideas go in, working software comes out, and we just have to figure out how to make sure that the ideas going in are perfect; then the software will be perfect.

    Lean Manufacturing

    In the manufacturing industry, Lean manufacturing has revolutionized Henry Ford's assembly line. Derived from the Toyota process, Lean places emphasis on always providing value for the customer. Anything the customer wouldn't be willing to pay for is wasteful. Agile is based on similar principles. We're building software for people, and anything that isn't useful to them doesn't add value. Waterfall development would have teams build reams and reams of documentation about how the software should work. Agile development dispenses with this work because excessive documentation doesn't add value. Instead, teams focus on building documentation only when it truly adds value to the customer. Many other Agile principles are similar.

    Playing Catch-up

    Just like in the manufacturing industry, many managers in the software industry have yet to understand the value of the principles of Lean and Agile. They think they can wrap the uncertainties of software development up in a nice little package and then just execute, usually followed by failure. They spend a great deal of time and money trying to exactly predict the future. That expenditure of time and money doesn't add value to the customer. Managers that understand Agile know that there is a better way. They will instead focus on the priorities of the near term in detail and leave the future to take care of itself. They have very detailed two-week plans with less detailed quarterly plans. These plans are guided by a general corporate strategy that doesn't focus on the exact implementation details. These managers also think in smaller features rather than large functionality. This adds a great deal of value to customers, since the features that matter most are the ones that the team focuses on in the near term and is then able to deliver to the customers that are paying for them. Agile managers also realize that stale software is very costly. They know that keeping the technology in their software current is much less expensive and risky than infrequent large rewrites, and they schedule time in each release for refactoring the existing software.

    Agile Executives

    Even though Agile is a better way, I've still seen failures using the Agile process. While some of these failures can be attributed to the team, most of them are caused by managers, not the team. Managers fail to understand what Agile is, how it works, and how to get the information that they need to make good business decisions. I think this is a shame. I'm very pleased that Veracity understands this problem and is trying to do something about it. Veracity is a key sponsor of Agile Executives. In fact, Galen is this year's acting president for Agile Executives. The purpose of Agile Executives is to help managers better manage Agile teams and see better success. Agile Executives is trying to build a community of executives that range from managers interested in Agile to managers that have successfully adopted Agile. Together, these managers can form a community of support and ideas that will help make Agile teams more successful.

    Helping Your Team

    You can help too! Talk with your manager and get them involved in Agile Executives. Help Veracity build the community. If your manager understands Agile better, he'll understand how to help his teams, which will result in software that adds more value for customers. If you have any questions about how you can be involved, please let me know.

    Technorati Tags: Agile, Agile Executives

    Read the article

  • What are the quality metrics for RAM?

    - by Hi-Tech KitKat Android
    I have searched for RAM and found several different specifications for the same capacity. What are the differences, and how do these compare in performance?

    RAM 1 (Transcend)
    - Memory Type: 2 GB (8 x 128 MB) DDR2 DIMM
    - Memory Standard: DDR2-800/PC-6400
    - Compatible Device: PC
    - Pins: 240-pin
    - Burst Length: 4, 8
    - Buffered/Unbuffered: Unbuffered Memory
    - Memory Clock: 400 MHz
    - Technology: DDR2 SDRAM
    - Memory CAS Latency: 4, 5, 6

    RAM 2 (Transcend)
    - Memory Type: 2 GB (8 x 128 MB) DDR2 DIMM
    - Memory Standard: DDR2-667/PC2-5300
    - Compatible Device: PC
    - Pins: 240-pin
    - Burst Length: 4, 8
    - Buffered/Unbuffered: Unbuffered Memory
    - Memory Clock: 333 MHz
    - Technology: DDR2 SDRAM
    - Memory CAS Latency: 3, 4, 5

    RAM 3 (Kingston)
    - Memory Type: 2 GB (64 x 256 MB) 800 MHz DDR2 DIMM
    - Compatible Device: PC
    - Pins: 240-pin
    - Error Check: Non-ECC
    - Buffered/Unbuffered: Unbuffered Memory
    - Memory Clock: 200 MHz
    - Technology: DDR2 SDRAM
    - Memory CAS Latency: 6

    What is the effect of the following?
    - Memory Type (given as 8 x 128 MB)
    - Memory Clock (given in MHz)
    - CAS Latency (given as 4, 5, 6)

    My requirement is 2 GB DDR2 for a desktop. Please help.
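    For intuition on how memory clock and CAS latency trade off, here is a rough sketch in Python. It is an assumption-laden illustration, not vendor data: it treats CL as counted in I/O bus clock cycles (standard for DDR2, where DDR2-800 runs its I/O bus at 400 MHz and DDR2-667 at 333 MHz) and picks one CL value from each module's listed range.

        # First-word latency is roughly CAS cycles divided by the I/O bus clock.
        # The CL values below (5, 4, 6) are one pick from each module's listed
        # range, not vendor-confirmed figures.
        modules = {
            "RAM1 (DDR2-800, CL5)": (400e6, 5),  # (I/O clock in Hz, assumed CL)
            "RAM2 (DDR2-667, CL4)": (333e6, 4),
            "RAM3 (DDR2-800, CL6)": (400e6, 6),
        }

        for name, (clock_hz, cl) in modules.items():
            latency_ns = cl / clock_hz * 1e9  # cycles * nanoseconds per cycle
            print(f"{name}: first-word latency ~{latency_ns:.1f} ns")

    On those assumptions the three modules land within a few nanoseconds of each other on first access, while sustained bandwidth scales with the transfer rate, so the DDR2-800 parts still move more data per second than the DDR2-667 part.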

    Read the article

  • Large concurrent user performance issues for Apache + mod_jk + GlassFish v3.1 clusters

    - by user10035
    I am running a Java EE 6 EAR application on GlassFish v3.1 (2 clusters with 2 instances each), load balanced by Apache v2.2 with mod_jk, all on the same server (Windows Server 2003 R2, Intel Xeon CPU X5670 @ 2.93 GHz, 6 GB RAM, 2 CPUs). The web application is accessed by ~100 users. When they all try to access it at the same time every morning around 8am, the response is very slow while trying to access the main JSF home page. Apart from that, I have seen the CPU usage of the httpd process spike up to 99% frequently during the day, and I start seeing errors in the mod_jk.log file:

        [Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
        [Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_service::jk_ajp_common.c (2543): (myAppLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)

    Any suggestions on how I can go about improving this? The Apache configuration is mostly the default, as shown below:

        ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
        Listen 80

        LoadModule actions_module modules/mod_actions.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule asis_module modules/mod_asis.so
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule include_module modules/mod_include.so
        LoadModule isapi_module modules/mod_isapi.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule setenvif_module modules/mod_setenvif.so

        <IfModule !mpm_netware_module>
          <IfModule !mpm_winnt_module>
            User daemon
            Group daemon
          </IfModule>
        </IfModule>

        DocumentRoot "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs"

        <Directory />
          Options FollowSymLinks
          AllowOverride None
          Order deny,allow
          Deny from all
        </Directory>

        <Directory "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs">
          Options Indexes FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
        </Directory>

        <IfModule dir_module>
          DirectoryIndex index.html
        </IfModule>

        <FilesMatch "^\.ht">
          Order allow,deny
          Deny from all
          Satisfy All
        </FilesMatch>

        ErrorLog "logs/error.log"
        LogLevel warn

        <IfModule log_config_module>
          LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
          LogFormat "%h %l %u %t \"%r\" %>s %b" common
          <IfModule logio_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
          </IfModule>
          CustomLog "logs/access.log" common
        </IfModule>

        <IfModule alias_module>
          ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
        </IfModule>

        <Directory "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin">
          AllowOverride None
          Options None
          Order allow,deny
          Allow from all
        </Directory>

        DefaultType text/plain

        <IfModule mime_module>
          TypesConfig conf/mime.types
          AddType application/x-compress .Z
          AddType application/x-gzip .gz .tgz
        </IfModule>

        Include conf/extra/httpd-mpm.conf

        <IfModule ssl_module>
          SSLRandomSeed startup builtin
          SSLRandomSeed connect builtin
        </IfModule>

        LoadModule jk_module modules/mod_jk.so
        JkWorkersFile conf/workers.properties
        JkLogFile logs/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
        JkRequestLogFormat "%w %V %T"
        JkMount /myApp/* loadbalancerLocal
        JkMount /myAppRemote/* loadbalancerRemote
        JkMount /myApp loadbalancerLocal
        JkMount /myAppRemote loadbalancerRemote

    The workers.properties config file is:

        worker.list=loadbalancerLocal,loadbalancerRemote

        worker.myAppLocalInstance1.type=ajp13
        worker.myAppLocalInstance1.host=localhost
        worker.myAppLocalInstance1.port=8109
        worker.myAppLocalInstance1.lbfactor=1
        worker.myAppLocalInstance1.socket_keepalive=1
        worker.myAppLocalInstance1.socket_timeout=1000

        worker.myAppLocalInstance2.type=ajp13
        worker.myAppLocalInstance2.host=localhost
        worker.myAppLocalInstance2.port=8209
        worker.myAppLocalInstance2.lbfactor=1
        worker.myAppLocalInstance2.socket_keepalive=1
        worker.myAppLocalInstance2.socket_timeout=1000

        worker.myAppLocalInstance3.type=ajp13
        worker.myAppLocalInstance3.host=localhost
        worker.myAppLocalInstance3.port=8309
        worker.myAppLocalInstance3.lbfactor=1
        worker.myAppLocalInstance3.socket_keepalive=1
        worker.myAppLocalInstance3.socket_timeout=1000

        worker.myAppLocalInstance4.type=ajp13
        worker.myAppLocalInstance4.host=localhost
        worker.myAppLocalInstance4.port=8409
        worker.myAppLocalInstance4.lbfactor=1
        worker.myAppLocalInstance4.socket_keepalive=1
        worker.myAppLocalInstance4.socket_timeout=1000

        worker.myAppRemoteInstance1.type=ajp13
        worker.myAppRemoteInstance1.host=localhost
        worker.myAppRemoteInstance1.port=8509
        worker.myAppRemoteInstance1.lbfactor=1
        worker.myAppRemoteInstance1.socket_keepalive=1
        worker.myAppRemoteInstance1.socket_timeout=1000

        worker.myAppRemoteInstance2.type=ajp13
        worker.myAppRemoteInstance2.host=localhost
        worker.myAppRemoteInstance2.port=8609
        worker.myAppRemoteInstance2.lbfactor=1
        worker.myAppRemoteInstance2.socket_keepalive=1
        worker.myAppRemoteInstance2.socket_timeout=1000

        worker.myAppRemoteInstance3.type=ajp13
        worker.myAppRemoteInstance3.host=localhost
        worker.myAppRemoteInstance3.port=8709
        worker.myAppRemoteInstance3.lbfactor=1
        worker.myAppRemoteInstance3.socket_keepalive=1
        worker.myAppRemoteInstance3.socket_timeout=1000

        worker.myAppRemoteInstance4.type=ajp13
        worker.myAppRemoteInstance4.host=localhost
        worker.myAppRemoteInstance4.port=8809
        worker.myAppRemoteInstance4.lbfactor=1
        worker.myAppRemoteInstance4.socket_keepalive=1
        worker.myAppRemoteInstance4.socket_timeout=1000

        worker.loadbalancerLocal.type=lb
        worker.loadbalancerLocal.sticky_session=True
        worker.loadbalancerLocal.balance_workers=myAppLocalInstance1,myAppLocalInstance2,myAppLocalInstance3,myAppLocalInstance4

        worker.loadbalancerRemote.type=lb
        worker.loadbalancerRemote.sticky_session=True
        worker.loadbalancerRemote.balance_workers=myAppRemoteInstance1,myAppRemoteInstance2,myAppRemoteInstance3,myAppRemoteInstance4
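    As a first diagnostic step, it can help to see which workers the failures cluster on. Below is a small, assumption-laden Python sketch that tallies the "sending request to tomcat failed" lines per worker; the log path and regular expression are guesses based on the sample log lines above, not a mod_jk API.

        import re
        from collections import Counter

        # Matches the "(workerName) sending request to tomcat failed" pattern
        # seen in the sample mod_jk.log lines.
        failure = re.compile(r'\((\w+)\) sending request to tomcat failed')

        counts = Counter()
        with open('mod_jk.log') as log:
            for line in log:
                match = failure.search(line)
                if match:
                    counts[match.group(1)] += 1

        for worker, n in counts.most_common():
            print(f"{worker}: {n} failed requests")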

    Read the article

  • What is the simplest human readable configuration file format?

    - by Juha
    The current configuration file is as follows:

        mainwindow.title = 'test'
        mainwindow.position.x = 100
        mainwindow.position.y = 200
        mainwindow.button.label = 'apply'
        mainwindow.button.size.x = 100
        mainwindow.button.size.y = 30
        logger.datarate = 100
        logger.enable = True
        logger.filename = './test.log'

    This is read with Python into a nested dictionary:

        {
            'mainwindow': {
                'button': {
                    'label': {'value': 'apply'},
                    ...
                },
                ...
            },
            'logger': {
                'datarate': {'value': 100},
                'enable': {'value': True},
                'filename': {'value': './test.log'},
            },
        }

    Is there a better way of doing this? The idea is to get XML-type behavior while avoiding XML as long as possible. The end user is assumed almost totally computer illiterate and basically uses Notepad and copy-paste, so the standard Python "header + variables" style is considered too difficult. The non-technical user edits the config file; able programmers handle the dictionaries. A nested dictionary is chosen for easy splitting (logger does not need, or even cannot have/edit, mainwindow parameters).
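    One lightweight approach, sketched below, keeps the dotted-key format exactly as shown and builds the nested dictionary from it. The function name parse_config and the file handling are assumptions for illustration; only the line format and the {'value': ...} leaf shape come from the question.

        import ast

        def parse_config(path):
            """Parse 'a.b.c = value' lines into nested dicts with {'value': ...} leaves."""
            config = {}
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith('#'):
                        continue  # skip blank lines and comments
                    key, _, raw = line.partition('=')
                    # 'apply' -> str, 100 -> int, True -> bool, without eval'ing code
                    value = ast.literal_eval(raw.strip())
                    node = config
                    parts = key.strip().split('.')
                    for part in parts[:-1]:
                        node = node.setdefault(part, {})  # create levels on demand
                    node[parts[-1]] = {'value': value}
            return config

    Using ast.literal_eval means quoted strings, integers, and booleans typed by the end user come back as real Python values while arbitrary code execution stays impossible, which suits a Notepad-edited file.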

    Read the article
