Search Results

Search found 26943 results on 1078 pages for 'unknown source'.


  • Silverlight Cream for December 11, 2010 -- #1007

    - by Dave Campbell
    In this Issue: Mike Wolf, Colin Eberhardt, Mike Snow(-2-, -3-), David Kelley(-2-, -3-), Jesse Liberty(-2-), Erik Mork, Jeff Blankenburg, Laurent Duveau, and Jeremy Likness(-2-).

    Above the Fold:
    Silverlight: "The definitive guide to Notification Window in Silverlight 4" – Laurent Duveau
    WP7: "Making the MS Adcontrol REALLY work on phone 7" – David Kelley
    Silverlight 5: "Silverlight 5: In the Trenches" – Mike Wolf

    From SilverlightCream.com:

    Silverlight 5: In the Trenches – How many people can discuss Silverlight 5 'In the Trenches'... apparently Mike Wolf can, and that's just what he's done in the post to whet your whistle (do people say that any more?) for when we can all get our hands on the bits.

    Visiblox, Visifire, DynamicDataDisplay – Charting Performance Comparison – Colin Eberhardt responds to reader requests, and revisits his Charting Performance after also some discussion with David Anson about the Silverlight Toolkit. This time including Dynamic Data Display which is quite impressive in the ratings... check out the post and the code.

    Win7 Mobile Back Arrow Key Interception – The simple fact is heavy bloggers rise, like Cream, to the top of my list, and I've been missing some goodness from Mike Snow... he's blogging WP7 stuff now... first up of the 'missed' ones is this one on intercepting the Back Arrow Key.

    Animating the Color of an Object – Switching back to Silverlight in general, Mike Snow's next post is on Animating color of an object, such as text foreground.

    Tombstoning on the Win7 Mobile Platform – And now back to WP7, Mike Snow is discussing Tombstoning... discussing the various aspects of it, and some code to use, if you haven't gotten your head around this one yet.

    What I tell Designers to give me... Integrating and Digital Zen – David Kelley has a post up describing what he needs from designers to get his job done... I heard him discussing this at the Firestarter, and didn't realize he had written it up... these 8 items are things learned by doing, and should be discussed with your designers.

    Making the MS Adcontrol REALLY work on phone 7 – David Kelley also has a post up discussing how to really get the Ad control working on WP7 apps... since I've seen lots of posts about this, having a definitive explanation from someone that's doing it is a good thing.

    Performance Optimization on Phone 7 – In a break from his norm of discussing UX, David Kelley is talking about performance on WP7 devices in this post.

    Windows Phone From Scratch #10 – Visual State Part 2 – When I saw Jesse Liberty's latest post, I realized I had missed his Part 2 of VSM for WP7... don't you miss it... this completes the good stuff from number 9 :)

    Windows Phone From Scratch #11 – Behaviors – Jesse Liberty's latest Windows Phone from Scratch is up... and he's talking about Behaviors this time out... more of an overview or introduction to behaviors, but all good.

    Show 112: Scott Guthrie on Silverlight 5 – Erik Mork's latest Sparkling Client podcast is up and he was able to get some time with Scott Guthrie at the Firestarter.

    What I Learned in WP7 – Issue #1 – Jeff Blankenburg decided to do another series, only this one isn't promised as every day... it's "What I Learned in WP7"... and the first is up... good interesting bits found surrounding the WP7 device.

    The definitive guide to Notification Window in Silverlight 4 – Laurent Duveau has a great post up that will have you doing Silverlight 'toast' notifications in no time... good descriptions and source.

    Lessons Learned in Personal Web Page Part 1: Dynamic XAML – Jeremy Likness has rebuilt his personal website in Silverlight and is sharing some of that experience on his blog. This first post discusses the dynamic content. He used Jounce, of course, and included the Silverlight Navigation Framework, and... you can download all the source.

    Lessons Learned in Personal Web Page Part 2: Enter the Matrix – Jeremy Likness's second post about building his website is all about the 'Matrix' page... pretty cool stuff... check it out... I think it looks great.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • MySQL Syslog Audit Plugin

    - by jonathonc
    This post shows the construction process of the Syslog Audit plugin that was presented at MySQL Connect 2012. It is based on an environment that has the appropriate development tools enabled including gcc,g++ and cmake. It also assumes you have downloaded the MySQL source code (5.5.16 or higher) and have compiled and installed the system into the /usr/local/mysql directory ready for use.  The information provided below is designed to show the different components that make up a plugin, and specifically an audit type plugin, and how it comes together to be used within the MySQL service. The MySQL Reference Manual contains information regarding the plugin API and how it can be used, so please refer there for more detailed information. The code in this post is designed to give the simplest information necessary, so handling every return code, managing race conditions etc is not part of this example code. Let's start by looking at the most basic implementation of our plugin code as seen below: /*    Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.    Author:  Jonathon Coombes    Licence: GPL    Description: An auditing plugin that logs to syslog and                 can adjust the loglevel via the system variables. */ #include <stdio.h> #include <string.h> #include <mysql/plugin_audit.h> #include <syslog.h> There is a commented header detailing copyright/licencing and meta-data information and then the include headers. The two important include statements for our plugin are the syslog.h plugin, which gives us the structures for syslog, and the plugin_audit.h include which has details regarding the audit specific plugin api. Note that we do not need to include the general plugin header plugin.h, as this is done within the plugin_audit.h file already. To implement our plugin within the current implementation we need to add it into our source code and compile. > cd /usr/local/src/mysql-5.5.28/plugin > mkdir audit_syslog > cd audit_syslog A simple CMakeLists.txt file is created to manage the plugin compilation: MYSQL_ADD_PLUGIN(audit_syslog audit_syslog.cc MODULE_ONLY) Run the cmake  command at the top level of the source and then you can compile the plugin using the 'make' command. This results in a compiled audit_syslog.so library, but currently it is not much use to MySQL as there is no level of api defined to communicate with the MySQL service. Now we need to define the general plugin structure that enables MySQL to recognise the library as a plugin and be able to install/uninstall it and have it show up in the system. The structure is defined in the plugin.h file in the MySQL source code.  
/*   Plugin library descriptor */ mysql_declare_plugin(audit_syslog) {   MYSQL_AUDIT_PLUGIN,           /* plugin type                    */   &audit_syslog_descriptor,     /* descriptor handle               */   "audit_syslog",               /* plugin name                     */   "Author Name",                /* author                          */   "Simple Syslog Audit",        /* description                     */   PLUGIN_LICENSE_GPL,           /* licence                         */   audit_syslog_init,            /* init function     */   audit_syslog_deinit,          /* deinit function */   0x0001,                       /* plugin version                  */   NULL,                         /* status variables        */   NULL,                         /* system variables                */   NULL,                         /* no reserves                     */   0,                            /* no flags                        */ } mysql_declare_plugin_end; The general plugin descriptor above is standard for all plugin types in MySQL. The plugin type is defined along with the init/deinit functions and interface methods into the system for sharing information, and various other metadata information. The descriptors have an internally recognised version number so that plugins can be matched against the api on the running server. The other details are usually related to the type-specific methods and structures to implement the plugin. Each plugin has a type-specific descriptor as well which details how the plugin is implemented for the specific purpose of that plugin type. /*   Plugin type-specific descriptor */ static struct st_mysql_audit audit_syslog_descriptor= {   MYSQL_AUDIT_INTERFACE_VERSION,                        /* interface version    */   NULL,                                                 /* release_thd function */   audit_syslog_notify,                                  /* notify function      */   { (unsigned long) MYSQL_AUDIT_GENERAL_CLASSMASK |                     MYSQL_AUDIT_CONNECTION_CLASSMASK }  /* class mask           */ }; In this particular case, the release_thd function has not been defined as it is not required. The important method for auditing is the notify function which is activated when an event occurs on the system. The notify function is designed to activate on an event and the implementation will determine how it is handled. For the audit_syslog plugin, the use of the syslog feature sends all events to the syslog for recording. The class mask allows us to determine what type of events are being seen by the notify function. There are currently two major types of event: 1. General Events: This includes general logging, errors, status and result type events. This is the main one for tracking the queries and operations on the database. 2. Connection Events: This group is based around user logins. It monitors connections and disconnections, but also if somebody changes user while connected. With most audit plugins, the principle behind the plugin is to track changes to the system over time and counters can be an important part of this process. The next step is to define and initialise the counters that are used to track the events in the service. There are 3 counters defined in total for our plugin - the # of general events, the # of connection events and the total number of events.  
static volatile int total_number_of_calls; /* Count MYSQL_AUDIT_GENERAL_CLASS event instances */ static volatile int number_of_calls_general; /* Count MYSQL_AUDIT_CONNECTION_CLASS event instances */ static volatile int number_of_calls_connection; The init and deinit functions for the plugin are there to be called when the plugin is activated and when it is terminated. These offer the best option to initialise the counters for our plugin: /*  Initialize the plugin at server start or plugin installation. */ static int audit_syslog_init(void *arg __attribute__((unused))) {     openlog("mysql_audit:",LOG_PID|LOG_PERROR|LOG_CONS,LOG_USER);     total_number_of_calls= 0;     number_of_calls_general= 0;     number_of_calls_connection= 0;     return(0); } The init function does a call to openlog to initialise the syslog functionality. The parameters are the service to log under ("mysql_audit" in this case), the syslog flags and the facility for the logging. Then each of the counters are initialised to zero and a success is returned. If the init function is not defined, it will return success by default. /*  Terminate the plugin at server shutdown or plugin deinstallation. */ static int audit_syslog_deinit(void *arg __attribute__((unused))) {     closelog();     return(0); } The deinit function will simply close our syslog connection and return success. Note that the syslog functionality is part of the glibc libraries and does not require any external factors.  The function names are what we define in the general plugin structure, so these have to match otherwise there will be errors. The next step is to implement the event notifier function that was defined in the type specific descriptor (audit_syslog_descriptor) which is audit_syslog_notify. /* Event notifier function */ static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)), unsigned int event_class, const void *event) { total_number_of_calls++; if (event_class == MYSQL_AUDIT_GENERAL_CLASS) { const struct mysql_event_general *event_general= (const struct mysql_event_general *) event; number_of_calls_general++; syslog(audit_loglevel,"%lu: User: %s Command: %s Query: %s\n", event_general->general_thread_id, event_general->general_user, event_general->general_command, event_general->general_query ); } else if (event_class == MYSQL_AUDIT_CONNECTION_CLASS) { const struct mysql_event_connection *event_connection= (const struct mysql_event_connection *) event; number_of_calls_connection++; syslog(audit_loglevel,"%lu: User: %s@%s[%s] Event: %d Status: %d\n", event_connection->thread_id, event_connection->user, event_connection->host, event_connection->ip, event_connection->event_subclass, event_connection->status ); } }   In the case of an event, the notifier function is called. The first step is to increment the total number of events that have occurred in our database.The event argument is then cast into the appropriate event structure depending on the class type, of general event or connection event. The event type counters are incremented and details are sent via the syslog() function out to the system log. There are going to be different line formats and information returned since the general events have different data compared to the connection events, even though some of the details overlap, for example, user, thread id, host etc. On compiling the code now, there should be no errors and the resulting audit_syslog.so can be loaded into the server and ready to use. 
Log into the server and type: mysql> INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'; This will install the plugin and will start updating the syslog immediately. Note that the audit plugin attaches to the immediate thread and cannot be uninstalled while that thread is active. This means that you cannot run the UNISTALL command until you log into a different connection (thread) on the server. Once the plugin is loaded, the system log will show output such as the following: Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so' Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so' Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: show tables Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: show tables Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: select * from t1 Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: select * from t1 It appears that two of each event is being shown, but in actuality, these are two separate event types - the result event and the status event. This could be refined further by changing the audit_syslog_notify function to handle the different event sub-types in a different manner.  So far, it seems that the logging is working with events showing up in the syslog output. The issue now is that the counters created earlier to track the number of events by type are not accessible when the plugin is being run. Instead there needs to be a way to expose the plugin specific information to the service and vice versa. This could be done via the information_schema plugin api, but for something as simple as counters, the obvious choice is the system status variables. This is done using the standard structure and the declaration: /*  Plugin status variables for SHOW STATUS */ static struct st_mysql_show_var audit_syslog_status[]= {   { "Audit_syslog_total_calls",     (char *) &total_number_of_calls,     SHOW_INT },   { "Audit_syslog_general_events",     (char *) &number_of_calls_general,     SHOW_INT },   { "Audit_syslog_connection_events",     (char *) &number_of_calls_connection,     SHOW_INT },   { 0, 0, SHOW_INT } };   The structure is simply the name that will be displaying in the mysql service, the address of the associated variables, and the data type being used for the counter. It is finished with a blank structure to show that there are no more variables. Remember that status variables may have the same name for variables from other plugin, so it is considered appropriate to add the plugin name at the start of the status variable name to avoid confusion. Looking at the status variables in the mysql client shows something like the following: mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 2     | | Audit_syslog_total_calls       | 3     | +--------------------------------+-------+ 3 rows in set (0.00 sec) The final connectivity piece for the plugin is to allow the interactive change of the logging level between the plugin and the system. 
This requires the ability to send changes via the mysql service through to the plugin. This is done using the system variables interface and defining a single variable to keep track of the active logging level for the facility. /* Plugin system variables for SHOW VARIABLES */ static MYSQL_SYSVAR_STR(loglevel, audit_loglevel,                         PLUGIN_VAR_RQCMDARG,                         "User can specify the log level for auditing",                         audit_loglevel_check, audit_loglevel_update, "LOG_NOTICE"); static struct st_mysql_sys_var* audit_syslog_sysvars[] = {     MYSQL_SYSVAR(loglevel),     NULL }; So now the system variable 'loglevel' is defined for the plugin and associated to the global variable 'audit_loglevel'. The check or validation function is defined to make sure that no garbage values are attempted in the update of the variable. The update function is used to save the new value to the variable. Note that the audit_syslog_sysvars structure is defined in the general plugin descriptor to associate the link between the plugin and the system and how much they interact. Next comes the implementation of the validation function and the update function for the system variable. It is worth noting that if you have a simple numeric such as integers for the variable types, the validate function is often not required as MySQL will handle the automatic check and validation of simple types. /* longest valid value */ #define MAX_LOGLEVEL_SIZE 100 /* hold the valid values */ static const char *possible_modes[]= { "LOG_ERROR", "LOG_WARNING", "LOG_NOTICE", NULL };  static int audit_loglevel_check(     THD*                        thd,    /*!< in: thread handle */     struct st_mysql_sys_var*    var,    /*!< in: pointer to system                                         variable */     void*                       save,   /*!< out: immediate result                                         for update function */     struct st_mysql_value*      value)  /*!< in: incoming string */ {     char buff[MAX_LOGLEVEL_SIZE];     const char *str;     const char **found;     int length;     length= sizeof(buff);     if (!(str= value->val_str(value, buff, &length)))         return 1;     /*         We need to return a pointer to a locally allocated value in "save".         Here we pick to search for the supplied value in an global array of         constant strings and return a pointer to one of them.         The other possiblity is to use the thd_alloc() function to allocate         a thread local buffer instead of the global constants.     */     for (found= possible_modes; *found; found++)     {         if (!strcmp(*found, str))         {             *(const char**)save= *found;             return 0;         }     }     return 1; } The validation function is simply to take the value being passed in via the SET GLOBAL VARIABLE command and check if it is one of the pre-defined values allowed  in our possible_values array. If it is found to be valid, then the value is assigned to the save variable ready for passing through to the update function. 
static void audit_loglevel_update(     THD*                        thd,        /*!< in: thread handle */     struct st_mysql_sys_var*    var,        /*!< in: system variable                                             being altered */     void*                       var_ptr,    /*!< out: pointer to                                             dynamic variable */     const void*                 save)       /*!< in: pointer to                                             temporary storage */ {     /* assign the new value so that the server can read it */     *(char **) var_ptr= *(char **) save;     /* assign the new value to the internal variable */     audit_loglevel= *(char **) save; } Since all the validation has been done already, the update function is quite simple for this plugin. The first part is to update the system variable pointer so that the server can read the value. The second part is to update our own global plugin variable for tracking the value. Notice that the save variable is passed in as a void type to allow handling of various data types, so it must be cast to the appropriate data type when assigning it to the variables. Looking at how the latest changes affect the usage of the plugin and the interaction within the server shows: mysql> show global variables like "audit%"; +-----------------------+------------+ | Variable_name         | Value      | +-----------------------+------------+ | audit_syslog_loglevel | LOG_NOTICE | +-----------------------+------------+ 1 row in set (0.00 sec) mysql> set global audit_syslog_loglevel="LOG_ERROR"; Query OK, 0 rows affected (0.00 sec) mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 11    | | Audit_syslog_total_calls       | 12    | +--------------------------------+-------+ 3 rows in set (0.00 sec) mysql> show global variables like "audit%"; +-----------------------+-----------+ | Variable_name         | Value     | +-----------------------+-----------+ | audit_syslog_loglevel | LOG_ERROR | +-----------------------+-----------+ 1 row in set (0.00 sec)   So now we have a plugin that will audit the events on the system and log the details to the system log. It allows for interaction to see the number of different events within the server details and provides a mechanism to change the logging level interactively via the standard system methods of the SET command. A more complex auditing plugin may have more detailed code, but each of the above areas is what will be involved and simply expanded on to add more functionality. With the above skeleton code, it is now possible to create your own audit plugins to implement your own auditing requirements. If, however, you are not of the coding persuasion, then you could always consider the option of the MySQL Enterprise Audit plugin that is available to purchase.

    Read the article

  • 5 Steps to getting started with IronRuby

    - by Eric Nelson
    IronRuby is a Open Source implementation of the Ruby programming language for .NET, heavily relying on Microsoft's Dynamic Language Runtime. The project's #1 goal is to be a true Ruby implementation, meaning it runs existing Ruby code. Check out this summary of using the Ruby standard library and 3rd party libraries in IronRuby. IronRuby has tight integration with .NET, so any .NET types can be used from IronRuby and the IronRuby runtime can be embedded into any .NET application. These 5 steps should get you nicely up and running on IronRuby – OR … you could just watch a video session from the lead developer which took place earlier this month (March 2010 - 60mins). But the 5 steps will be quicker :-) Step 1 – Install IronRuby :-) You can install IronRuby automatically using an MSI or manually. For simplicity I would recommend the MSI install. TIP: As of the 25th of March IronRuby has not quite shipped. The download above is a Release Candidate (RC) which means it is still undergoing final testing by the team. You will need to uninstall this version (RC3) once the final release is available. The good news is that uninstalling IronRuby RC3 will work without a hitch as the MSI does relatively little. Step 2 – Install an IronRuby friendly editor You will need to Install an editor to work with IronRuby as there is no designer support for IronRuby inside Visual Studio. There are many editors to choose from but I would recommend you either went with: SciTE (Download the MSI): This is a lightweight text editor which is simple to get up and running. SciTE understands Ruby syntax and allows you to easily run IronRuby code within the editor with a small change to the config file. SharpDevelop 3.2 (Download the MSI): This is an open source development environment for C#, VB, Boo and now IronRuby. IronRuby support is new but it does include integrated debugging. You might also want to check out the main site for SharpDevelop. TIP: There are commercial tools for Ruby development which offer richer support such as intellisense.. They can be coerced into working with IronRuby. A good one to start with is RubyMine which needs some small changes to make it work with IronRuby. Step 3 – Run the IronRuby Tutorial Run through the IronRuby tutorial which is included in the IronRuby download. It covers off the basics of the Ruby languages and how IronRuby integrates with .NET. In a typical install it will end up at C:\Program Files\IronRuby 0.9.4.0\Samples\Tutorial. Which will give you the tutorial implemented in .NET and Ruby. TIP: You might also want to check out these two introductory posts Using IronRuby and .NET to produce the ‘Hello World of WPF’ and What's IronRuby, and how do I put it on Rails? Step 4 – Get some good books to read Get a great book on Ruby and IronRuby. There are several free ebooks on Ruby which will help you learn the language. The little book of Ruby is a good place to start. I would also recommend you purchase IronRuby Unleashed (Buy on Amazon UK | Buy on Amazon USA). You might also want to check out this mini-review. Other books are due out soon including IronRuby in Action. TIP: Also check out the official documentation for using .NET from IronRuby. Step 5 – Keep an eye on the team blogs Keep an eye on the IronRuby team blogs including Jimmy Schementi, Jim Deville and Tomas Matousek (full list) TIP: And keep a watch out for the final release of IronRuby – due anytime soon!
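
    To give a flavour of the tight .NET integration mentioned above, here is a minimal C# sketch of hosting the IronRuby runtime through the DLR hosting API. It assumes the IronRuby and Microsoft.Scripting assemblies from the IronRuby install are referenced; the Ruby snippets themselves are only illustrative.

        using System;
        using IronRuby;                     // IronRuby.dll from the IronRuby install
        using Microsoft.Scripting.Hosting;  // Microsoft.Scripting.dll

        class Program
        {
            static void Main()
            {
                // Create a DLR script engine for Ruby.
                ScriptEngine engine = Ruby.CreateEngine();

                // Run a Ruby snippet directly from the host process.
                engine.Execute("puts 'Hello from IronRuby'");

                // Evaluate Ruby expressions and bring the results back as .NET values.
                int sum = engine.Execute<int>("(1..10).inject { |total, n| total + n }");
                string shout = engine.Execute<string>("'ironruby and .net'.upcase");
                Console.WriteLine("{0} / {1}", sum, shout);

                // Ruby code can reach back into the .NET framework as well.
                engine.Execute("puts System::Environment.machine_name");

                // Or run a whole script file shipped alongside the application.
                // engine.ExecuteFile("setup.rb");
            }
        }
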

    Read the article

  • Text Expansion Awareness for UX Designers: Points to Consider

    - by ultan o'broin
    Awareness of translated text expansion dynamics is important for enterprise applications UX designers (I am assuming all source text for translation is in English, though apps development can take place in other natural languages too). This consideration goes beyond the standard 'character multiplication' rule and must take into account the avoidance of other layout tricks that a designer might be tempted to try. Follow these guidelines.

    For general text expansion, remember the simple rule that the shorter the word is in English, the more it will expand when translated. See the examples provided by Richard Ishida of the W3C and you'll get the idea. So, forget the 30 percent or one inch minimum expansion rule of the old Forms days. Unfortunately, remembering convoluted text expansion rules, based on a percentage of the US English character count, can be tough going. Try these:

    Up to 10 characters: 100 to 200%
    11 to 20 characters: 80 to 100%
    21 to 30 characters: 60 to 80%
    31 to 50 characters: 40 to 60%
    51 to 70 characters: 31 to 40%
    Over 70 characters: 30%
    (Source: IBM)

    So it might be easier to remember a rule that if your English text is less than 20 characters then allow it to double in length (200 percent), and after that assume an increase of half the length of the text (50%). (Bear in mind that ADF can apply truncation rules on some components in English too.)

    If your text is stored in a database, developers must make sure the table column widths can accommodate the expansion of your text when translated, based on the byte size of the translated characters and not the number of characters. Use Unicode. One character does not equal one byte in the multilingual enterprise apps world. (See the short byte-count sketch after these guidelines.)

    Rely on a graceful transformation of translated text. Let all pages resize dynamically so the text wraps and flows naturally. ADF pages support this already. Think websites.

    Don't hard-code alignments. Use Start and End properties on components and not Left or Right. Don't force alignments of components on the page by using texts of a certain length as spacers. Use proper label positioning and anchoring in ADF components or other technologies.

    Remember that an increase in text length means an increase in vertical space too when pages are resized. So don't hard-code vertical heights for any text areas.

    Don't be tempted to manually create text or printed reports this way either. They cannot be translated successfully, and are very difficult to maintain in English. Use XML, HTML, RTF and so on. Check out what Oracle BI Publisher offers.

    Don't force wrapping by using tricks such as \n or \t characters or HTML BR tags or forced page breaks. Once the text is translated the alignment will be destroyed. The position of the breaking character or tag would need to be moved anyway, or even removed.

    When creating tables, use table components. Don't use manually created tables that rely on word length to maintain column and row alignment. For example, don't use codeblock elements in HTML; use the proper table elements instead. Once translated, the alignment of manually formatted tabular data is destroyed.

    Finally, if there is a space restriction, don't use made-up acronyms, abbreviations or some form of daft text speak to save space. Besides being incomprehensible in English, they may need full translations of the shortened words, even if they can be figured out. Use approved or industry standard acronyms according to the UX style rules, not as a space-saving device.
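
    As a quick illustration of the byte-size point above, here is a small C# sketch. It is not from the original article: the sample strings and the 4-bytes-per-character worst case are illustrative assumptions, but it shows why column widths should be sized on UTF-8 byte counts rather than character counts.

        using System;
        using System.Text;

        class ExpansionCheck
        {
            static void Main()
            {
                // An English source label and two (hypothetical) translations.
                string english    = "Save";
                string translated = "Speichern";     // German, for illustration
                string multibyte  = "保存して終了";    // characters that need 3 bytes each in UTF-8

                foreach (string text in new[] { english, translated, multibyte })
                {
                    Console.WriteLine("{0}: {1} chars, {2} bytes in UTF-8",
                        text, text.Length, Encoding.UTF8.GetByteCount(text));
                }

                // Size a column by worst-case byte length, not character count.
                int maxChars = 20;
                int maxBytesUtf8 = maxChars * 4;   // UTF-8 can use up to 4 bytes per character
                Console.WriteLine("A {0}-character field needs up to {1} bytes of storage.",
                    maxChars, maxBytesUtf8);
            }
        }
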
    Restricted Real Estate on Mobile Devices
    On mobile devices real estate is limited. Using shortened text is fine as long as it is comprehensible. Users in the mobile space prefer brevity too, as they are on the go, performing three-minute tasks, with no time to read lengthy texts. Using fragments, lightening up on unnecessary articles, and getting straight to the point with imperative forms of verbs makes sense on both real estate and user experience grounds.

    Read the article

  • System can't sleep, can't shutdown (might be networking related)

    - by Kevin Q
    I speculate it's networking related, since I can only shut down my system if I run /etc/init.d/networking stop first. If I try to put the system to sleep, it disconnects the network and then reconnects it automatically, and never goes to sleep. I uninstalled the network manager; after that, /etc/init.d/networking stop no longer helps shut down the system. When I boot into the OS, there are warnings about Unknown job: S20acpid, S20network-interface, S20network-manager. I got some advice to run update-rc.d -f to remove those scripts, which I am not certain is the right thing to do. The messages are gone but the problem remains. When I tried to restart, I used to get this message. It doesn't seem to be the modem manager, since I later removed it. At one point, the frozen shutdown/restart could be resumed when I pressed Ctrl+Alt+Delete, and it said something like "networking is stopped externally". If I log in in single-user mode and shut down or reboot, it shows this screen. I also tried GRUB_CMDLINE_LINUX_DEFAULT="acpi=force". This is Ubuntu 12.04 running on a ThinkPad T520, not a new installation; no such problems occurred before.

    Read the article

  • SQL SERVER – Get All the Information of Database using sys.databases

    - by pinaldave
    Earlier I wrote blog article SQL SERVER – Finding Last Backup Time for All Database. In the response of this article I have received very interesting script from SQL Server Expert Matteo as a comment in the blog. He has written script using sys.databases which provides plenty of the information about database. I suggest you can run this on your database and know unknown of your databases as well. SELECT database_id, CONVERT(VARCHAR(25), DB.name) AS dbName, CONVERT(VARCHAR(10), DATABASEPROPERTYEX(name, 'status')) AS [Status], state_desc, (SELECT COUNT(1) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS DataFiles, (SELECT SUM((size*8)/1024) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS [Data MB], (SELECT COUNT(1) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS LogFiles, (SELECT SUM((size*8)/1024) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS [Log MB], user_access_desc AS [User access], recovery_model_desc AS [Recovery model], CASE compatibility_level WHEN 60 THEN '60 (SQL Server 6.0)' WHEN 65 THEN '65 (SQL Server 6.5)' WHEN 70 THEN '70 (SQL Server 7.0)' WHEN 80 THEN '80 (SQL Server 2000)' WHEN 90 THEN '90 (SQL Server 2005)' WHEN 100 THEN '100 (SQL Server 2008)' END AS [compatibility level], CONVERT(VARCHAR(20), create_date, 103) + ' ' + CONVERT(VARCHAR(20), create_date, 108) AS [Creation date], -- last backup ISNULL((SELECT TOP 1 CASE TYPE WHEN 'D' THEN 'Full' WHEN 'I' THEN 'Differential' WHEN 'L' THEN 'Transaction log' END + ' – ' + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(),Backup_finish_date))) + ' days ago', 'NEVER')) + ' – ' + CONVERT(VARCHAR(20), backup_start_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_start_date, 108) + ' – ' + CONVERT(VARCHAR(20), backup_finish_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_finish_date, 108) + ' (' + CAST(DATEDIFF(second, BK.backup_start_date, BK.backup_finish_date) AS VARCHAR(4)) + ' ' + 'seconds)' FROM msdb..backupset BK WHERE BK.database_name = DB.name ORDER BY backup_set_id DESC),'-') AS [Last backup], CASE WHEN is_fulltext_enabled = 1 THEN 'Fulltext enabled' ELSE '' END AS [fulltext], CASE WHEN is_auto_close_on = 1 THEN 'autoclose' ELSE '' END AS [autoclose], page_verify_option_desc AS [page verify option], CASE WHEN is_read_only = 1 THEN 'read only' ELSE '' END AS [read only], CASE WHEN is_auto_shrink_on = 1 THEN 'autoshrink' ELSE '' END AS [autoshrink], CASE WHEN is_auto_create_stats_on = 1 THEN 'auto create statistics' ELSE '' END AS [auto create statistics], CASE WHEN is_auto_update_stats_on = 1 THEN 'auto update statistics' ELSE '' END AS [auto update statistics], CASE WHEN is_in_standby = 1 THEN 'standby' ELSE '' END AS [standby], CASE WHEN is_cleanly_shutdown = 1 THEN 'cleanly shutdown' ELSE '' END AS [cleanly shutdown] FROM sys.databases DB ORDER BY dbName, [Last backup] DESC, NAME Please let me know if you find this information useful. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
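
    If you would rather pull these details into an application than run the script in Management Studio, a minimal C# harness along the following lines will do the job. This is not part of Matteo's script: the connection string is an assumption, and only a handful of the sys.databases columns are selected to keep the example short (the full query above can be substituted).

        using System;
        using System.Data.SqlClient;

        class DatabaseInfo
        {
            static void Main()
            {
                // Point this at your own instance.
                const string connectionString =
                    "Data Source=.;Initial Catalog=master;Integrated Security=SSPI";

                const string query = @"
                    SELECT  database_id, name, state_desc,
                            recovery_model_desc, compatibility_level, create_date
                    FROM    sys.databases
                    ORDER BY name";

                using (SqlConnection connection = new SqlConnection(connectionString))
                using (SqlCommand command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0,-30} {1,-10} {2,-12} level {3} created {4:d}",
                                reader["name"], reader["state_desc"],
                                reader["recovery_model_desc"], reader["compatibility_level"],
                                reader["create_date"]);
                        }
                    }
                }
            }
        }
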

    Read the article

  • Displaying JSON in your Browser

    - by Rick Strahl
    Do you work with AJAX requests a lot and need to quickly check URLs for JSON results? Then you probably know that it’s a fairly big hassle to examine JSON results directly in the browser. Yes, you can use FireBug or Fiddler which work pretty well for actual AJAX requests, but if you just fire off a URL for quick testing in the browser you usually get hit by the Save As dialog and the download manager, followed by having to open the saved document in a text editor in FireFox. Enter JSONView which allows you to simply display JSON results directly in the browser. For example, imagine I have a URL like this: http://localhost/westwindwebtoolkitweb/RestService.ashx?Method=ReturnObject&format=json&Name1=Rick&Name2=John&date=12/30/2010 typed directly into the browser and that that returns a complex JSON object. With JSONView the result looks like this: No fuss, no muss. It just works. Here the result is an array of Person objects that contain additional address child objects displayed right in the browser. JSONView basically adds content type checking for application/json results and when it finds a JSON result takes over the rendering and formats the display in the browser. Note that it re-formats the raw JSON as well for a nicer display view along with collapsible regions for objects. You can still use View Source to see the raw JSON string returned. For me this is a huge time-saver. As I work with AJAX result data using GET and REST style URLs quite a bit it’s a big timesaver. To quickly and easily display JSON is a key feature in my development day and JSONView for all its simplicity fits that bill for me. If you’re doing AJAX development and you often review URL based JSON results do yourself a favor and pick up a copy of JSONView. Other Browsers JSONView works only with FireFox – what about other browsers? Chrome Chrome actually displays raw JSON responses as plain text without any plug-ins. There’s no plug-in or configuration needed, it just works, although you won’t get any fancy formatting. [updated from comments] There’s also a port of JSONView available for Chrome from here: https://chrome.google.com/webstore/detail/chklaanhfefbnpoihckbnefhakgolnmc It looks like it works just about the same as the JSONView plug-in for FireFox. Thanks for all that pointed this out… Internet Explorer Internet Explorer probably has the worst response to JSON encoded content: It displays an error page as it apparently tries to render JSON as XML: Yeah that seems real smart – rendering JSON as an XML document. WTF? To get at the actual JSON output, you can use View Source. To get IE to display JSON directly as text you can add a Mime type mapping in the registry:   Create a new application/json key in: HKEY_CLASSES_ROOT\MIME\Database\ContentType\application/json Add a string value of CLSID with a value of {25336920-03F9-11cf-8FD0-00AA00686F13} Add a DWORD value of Encoding with a value of 80000 I can’t take credit for this tip – found it here first on Sky Sander’s Blog. Note that the CLSID can be used for just about any type of text data you want to display as plain text in the IE. It’s the in-place display mechanism and it should work for most text content. For example it might also be useful for looking at CSS and JS files inside of the browser instead of downloading those documents as well. © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET  AJAX  
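
    If you would rather script the IE registry change described above than click through regedit, a small C# sketch is below. It mirrors the steps from the post, with two caveats that are assumptions on my part: the standard key name contains a space ("Content Type") even though it is often written without one, and the Encoding value of 80000 is treated here as hexadecimal (0x80000). Verify both against your own machine, and run the code elevated.

        using Microsoft.Win32;

        class RegisterJsonMimeType
        {
            static void Main()
            {
                // Creates (or opens) HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/json
                using (RegistryKey key = Registry.ClassesRoot.CreateSubKey(
                    @"MIME\Database\Content Type\application/json"))
                {
                    // In-place text viewer CLSID, taken from the post.
                    key.SetValue("CLSID", "{25336920-03F9-11cf-8FD0-00AA00686F13}");

                    // The post lists the Encoding DWORD as 80000; 0x80000 is assumed here.
                    key.SetValue("Encoding", 0x80000, RegistryValueKind.DWord);
                }
            }
        }
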

    Read the article

  • Best WordPress Shopping Cart & Ecommerce Plugins

    - by Edward
    A versatile WordPress Shopping Cart plugin can help you create a feature-rich online store on your WordPress-powered website or blog. Some are so advanced that you can get your store up and running in minutes. Some plugins allow you to take ecommerce to a next level with their high end customization tools. Here is a list of best WP shopping cart plugins available: Cart66 One of the best WordPress plugin with lots of features, great quality and ease of use. It accepts few more payment getways such as PayPal Website Payments Standard, PayPal Website Payments Professional, PayPal Express Checkout, eProcessing Network etc. It has flexible design options, recurring payments for subscriptions, memberships, and payment plans, Easy PCI Compliance – Safe and Secure. It is fast and efficient, one can sell digital and physical products and support is good. Price: Standard $49 & Professional $99 Details Download StorePress StorePress is a WordPress theme, which is fully coded. It comes with scripts that can change a WordPress blog into a veritable e-commerce virtual store. With this great premium WordPress theme, one can start affiliate stores, or promote affiliate products. Price: Single $59.99 & Developer License $119.99 Details Download WordPress eStore Plugin This shopping cart plugin comes with easy checkout, ease of design and use, automatic instant digital product delivery, Next Gen gallery integration, autoresponder integration etc. It is a lightweight shopping cart and allows multi site license. This plugin offers an amazingly comprehensive toolkit that will ensure your online shop is almost just plug-and-play. Price: $49.99 Details Download Shoppers Press Shoppers press is a premium cart for Word Press that comes with 20+ to choose from and 20+ built in payment gateways. It features one-click setups, personalized user accounts, easy management tools, detailed sales tracking, promotional options, a variety of product import tools, and many more features Price:$79 Details Download WordPress Shopping Cart plugin The WordPress Shopping Cart plugin by Tribulant quickly and seamlessly integrates an online shop with a fully functional shopping cart interface into any WordPress website. It has easy to use interface, which enables set up of multiple products and categorize and organizing them into multiple product categories. It also has many more attractive features. Price: $49.99 Details Download WP e-commerce WP e-commerce is a free full-featured shopping cart plugin for WordPress. It is a full featured shopping cart and boasts of easy checkout. It offers a wide range of features including SSL compatibility, customization and merchandising, integrated payment processing solutions including manual payment, Google Checkout and PayPal Payments, and email marketing. It is wordpress and social networking integrated. It is customizable by use of PHP template tag, wordpress shortcode and widgets. Details Download YAK for WordPress YAK is an open source shopping cart plugin for WordPress. It associates products with weblog entries (in other words, posts), so the post ID also becomes the product code. It supports both pages and posts as products, handles different types of product through categories. YAK supports downloadable products, so any e-books, plugins, or zip files you’re marketing can be easily purchased and dowloaded. Details Download Market Press It is another shopping cart full of many features. 
It offers following features such as assign categories and tags to products to make them easy to find, stock tracking with alerts, order management/alerts, fully customizable email messages, full support for most major currencies, fully customizable store urls/slugs, customers can checkout without being a site user etc. Expensive, but good option for those who can afford it. Price: $17.42/month Details Download Shopp It is an excellent shopping cart plugin for Word Press. This plugin is extremely easy to install and use. It has a cleaner interface. The customer support is good. Use can easily customize the look of the cart by using its amazing features. Price: $55 Details Download Related posts:8 PHP Shopping Cart Software for Reliable Ecommerce Solution Shopping Cart SEO 8 Free Open Source Shopping Carts

    Read the article

  • ASP.NET Server-side comments

    - by nmarun
    I believe a good number of you know about Server-side commenting. This blog is just like a revival to refresh your memories. When you write comments in your .aspx/.ascx files, people usually write them as: 1: <!-- This is a comment. --> To show that it actually makes a difference for using the server-side commenting technique, I’ve started a web application project and my default.aspx page looks like this: 1: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ServerSideComment._Default" %> 2: <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> 3: </asp:Content> 4: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> 5: <h2> 6: <!-- This is a comment --> 7: Welcome to ASP.NET! 8: </h2> 9: <p> 10: To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>. 11: </p> 12: <p> 13: You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409" 14: title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>. 15: </p> 16: </asp:Content> See the comment in line 6 and when I run the app, I can do a view source on the browser which shows up as: 1: <h2> 2: <!-- This is a comment --> 3: Welcome to ASP.NET! 4: </h2> Using Fiddler shows the page size as: Let’s change the comment style and use server-side commenting technique. 1: <h2> 2: <%-- This is a comment --%> 3: Welcome to ASP.NET! 4: </h2> Upon rendering, the view source looks like: 1: <h2> 2: 3: Welcome to ASP.NET! 4: </h2> Fiddler now shows the page size as: The difference is that client-side comments are ignored by the browser, but they are still sent down the pipe. With server-side comments, the compiler ignores everything inside this block. Visual Studio’s Text Editor toolbar also puts comments as server-side ones. If you want to give it a shot, go to your design page and press Ctrl+K, Ctrl+C on some selected text and you’ll see it commented in the server-side commenting style.

    Read the article

  • Laptop with Intel Graphics: external VGA monitor only gets signal on boot (no "hot plugging")

    - by iveand
    I am able to get an external VGA monitor (or projector) to work if I start my laptop with it connected. However, if I start the laptop with it disconnected there is no signal on the external. The Displays screen shows the external, and thinks that it is active, but there is no signal being sent to it. This has been a persistent problem since 10.04 (I am now on 12.04.... each upgrade hoping something is improved). I should note that even when it works (starting with display connected), Displays still says the monitor is "unknown" (but it sends the signal). For the correct resolution to display, I have had to add a few xrandr lines for my monitor to my .xprofile file... otherwise resolution is limited to default 1024x768. So, resolution issues can be worked around, but the main issue is that the external doesn't get a signal without starting the machine with it connected. I have tried: adding i915.modeset=1 to grub (also i965.modeset=1 since someone posted that this helped even though lshw shows i915) adding following ppa and doing a dist-upgrade: sudo add-apt-repository ppa:xorg-edgers/ppa Here are the details: Laptop: Toshiba Tecra M10 lspci listings for video: 00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller [8086:2a42] (rev 07) sudo lshw -C video listing: *-display:0 description: VGA compatible controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 07 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:46 memory:ff400000-ff7fffff memory:e0000000-efffffff ioport:cff8(size=8) *-display:1 UNCLAIMED description: Display controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 07 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources: memory:ffc00000-ffcfffff "System Info" shows my graphics as the following Mobile Intel® GM45 Express Chipset x86/MMX/SSE2

    Read the article

  • Where Next for Google Translate? And What of Information Quality?

    - by ultan o'broin
    Fascinating article in the UK Guardian newspaper called Can Google break the computer language barrier? In it, Andreas Zollman, who works on Google Translate, comments that the quality of Google Translate's output relative to the amount of data required to create that output is clearly now falling foul of the law of diminishing returns. He says: "Each doubling of the amount of translated data input led to about a 0.5% improvement in the quality of the output," he suggests, but the doublings are not infinite. "We are now at this limit where there isn't that much more data in the world that we can use," he admits. "So now it is much more important again to add on different approaches and rules-based models." The Translation Guy has a further discussion on this, called Google Translate is Finished. He says: "And there aren't that many doublings left, if any. I can't say how much text Google has assimilated into their machine translation databases, but it's been reported that they have scanned about 11% of all printed content ever published. So double that, and double it again, and once more, shoveling all that into the translation hopper, and pretty soon you get the sum of all human knowledge, which means a whopping 1.5% improvement in the quality of the engines when everything has been analyzed. That's what we've got to look forward to, at best, since Google spiders regularly surf the Web, which in its vastness dwarfs all previously published content. So to all intents and purposes, the statistical machine translation tools of Google are done. Outstanding job, Googlers. Thanks." Surprisingly, all this analysis hasn't raised that much comment from the fans of machine translation, or its detractors either for that matter. Perhaps, it's the season of goodwill? What is clear to me, however, of course is that Google Translate isn't really finished (in any sense of the word). I am sure Google will investigate and come up with new rule-based translation models to enhance what they have already and that will also scale effectively where others didn't. So too, will they harness human input, which really is the way to go to train MT in the quality direction. But that aside, what does it say about the quality of the data that is being used for statistical machine translation in the first place? From the Guardian article it's clear that a huge humanly translated corpus drove the gains for Google Translate and now what's left is the dregs of badly translated and poorly created source materials that just can't deliver quality translations. There's a message about information quality there, surely. In the enterprise applications space, where we have some control over content this whole debate reinforces the relationship between information quality at source and translation efficiency, regardless of the technology used to do the translation. But as more automation comes to the fore, that information quality is even more critical if you want anything approaching a scalable solution. This is important for user experience professionals. Issues like user generated content translation, multilingual personalization, and scalable language quality are central to a superior global UX; it's a competitive issue we cannot ignore.

    Read the article

  • Things to install on a new machine – revisited

    - by RoyOsherove
    as I prepare to get a new dev machine at work, I write the things I am going to install on it, before writing the first line of code on that machine: Control Freak Tools: Everything Search Engine – a free and amazingly fast search engine for files all over your machine. (just file names, not inside files). This is so fast I use it almost as a replacement for my start menu, but it’s also great for finding those files that get hidden and tucked away in dark places on my system. Ever had a situation where you needed to see exactly how many copies of X.dll were hiding on your machine and where? this tool is perfect for that. Google Chrome. It’s just fast. very fast. and Firefox has become the IE of alternative browsers in terms of speed and memory. Don’t even get me started on IE. TweetDeck – get a complete view of what’s up on twitter Total Commander – my still favorite file manager, over five years now. KatMouse – will scroll any window your hovering on, even if it’s not an active window, when you use scroll the wheel on it. PowerIso or Daemon Tools – for loading up ISO images of discs LogMeIn Ignition – quick access to your LogMeIn computers for online Backup: JungleDisk or BackBlaze KeePass – save important passwords MS Security Essentials – free anti virus that’s quoest and doesn’t make a mess of your system. for home: uTorrent – a torrent client that can read rss feeds (like the ones from ezrss.it ) Camtasia Studio and SnagIt – for recording and capturing the screen, and then adding cool effects on top. Foxit PDF Reader – much faster that adove reader. Toddler Keys (for home) – for when your baby wants to play with your keyboard. Live Writer – for writing blog posts for Lenovo ThinkPads – Lenovo System Update – if you have a “custom” system instead of the one that came built in, this will keep all your lenovo drivers up to date. FileZilla – for FTP stuff All the utils from sysinternals, (or try the live-links) especially: AutoRuns for deciding what’s really going to load at startup, procmon to see what’s really going on with processes in your system   Developer stuff: Reflector. Pure magic. Time saver. See source code of any compiled assembly. Resharper. Great for productivity and navigation across your source code FinalBuilder – a commercial build automation tool. Love it. much better than any xml based time hog out there. TeamCity – a great visual and friendly server to manage continuous integration. powerful features. Test Lint – a free addin for vs 2010 I helped create, that checks your unit tests for possible problems and hints you about it. TestDriven.NET – a great test runner for vs 2008 and 2010 with some powerful features. VisualSVN – a commercial tool if you use subversion. very reliable addin for vs 2008 and 2010 Beyond Compare – a powerful file and directory comparison tool. I love the fact that you can right click in windows exporer on any file and select “select left side to compare”, then right click on another file and select “compare with left side”. Great usability thought! PostSharp 2.0 – for addind system wide concepts into your code (tracing, exception management). Goes great hand in hand with.. SmartInspect – a powerful framework and viewer for tracing for your application. lots of hidden features. Crypto Obfuscator – a relatively new obfuscation tool for .NET that seems to do the job very well. Crypto Licensing – from the same company –finally a licensing solution that seems to really fit what I needed. And it works. 
    Fiddler 2 – great for debugging and tracing http traffic to and from your app.
    Debugging Tools for Windows and DebugDiag – great for debugging scenarios.
    Regulator and Regulazy – for testing and generating regular expressions.
    Notepad 2 – for quick editing and viewing with syntax highlighting.
    Still wanting more? I think this should keep you busy for a while.

    Read the article

  • Christian Radio Locator iPhone app

    - by Tim Hibbard
    For the last three months or so I've been working on an iPhone (and iPad) app in my spare time. It all started when I took the kids to Minneapolis and had a hard time finding radio stations to listen to on the trip. I looked in the App Store for an app that would use my GPS to show me Christian radio stations nearby, but there wasn't one. So I decided to build my own. Using public information from the FCC and a few other sources, I built a database in Google docs that contains the frequency for all Christian radio stations, where the tower is located and how far the tower can reach. I also included any streaming audio information and other contact information like Facebook or Twitter that I could find. Google spreadsheets publish in JSON format (yes, really) and Xcode can automatically deserialize JSON into a properly formatted entity. This is one area that Xcode is far superior to C#. In a just a few lines of code, I can have a list of in-memory strongly typed objects from a web-based JSON feed. To accomplish the same thing natively in .NET would be much more work and wouldn't feel nearly as clean when it was said and done. The snazzy icon shown above was built by my very talented wife. She hasn't yet provided any feedback on the app's user interface, which is why it is so plain and boring. I used a navigation view controller and EGO pull to refresh table view to construct the main window. Pulling down to refresh initiates a GPS lookup, which queries the database for radio stations in range (yes, you can pass parameters to Google spreadsheets and get a subset back in JSON). Pulling up on the table extends the range of the search and includes stations that may not be close enough to get clear audio. This feature is not that intuitive and the next version contains an update to that functionality. Tapping a cell will show a detail view that displays additional information about the station. The user can click to view the station on a map, click to listen to an online stream (if available) or click to see the station's Facebook or Twitter pages. Swiping back and forth on the table changes the information that is displayed on the right hand side of the table cell. It scrolls through the city where the tower is located, how far the phone is from the tower, the range of the tower and in the next version a signal strength indicator. This was pretty easy to implement once I figured out how to assign the gesture recognizer delegate.  Tapping and holding on a cell will jump the user to the map view screen. Which is pretty cool, but very hard for even a power user to discover. To tackle the issue of discoverability, the next version has a series of instructions displayed at the bottom of the screen to show the user the various shortcuts. Once the user has performed the swipes and long holds, the instructions disappear. I've learned a lot developing this app. Spending over a decade exclusively in .NET made the learning curve a bit steep, but once I learned the structure and syntax of Objective-C, I've learned to appreciate the power and simplicity of it. Here are a few screenshots. I would really appreciate any feedback and especially iTunes reviews. Technically it is open source and a smart googler could probably find it. I just haven't promoted it as open source.     Cross posted from timhibbard.com
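
    For comparison with the Xcode/JSON point above, here is roughly what the .NET side of that deserialization might look like with DataContractJsonSerializer. It is only a sketch: the Station fields, JSON member names and feed URL are hypothetical, since the post does not show the actual spreadsheet schema.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;

        // Hypothetical shape of one radio-station entry.
        [DataContract]
        public class Station
        {
            [DataMember(Name = "frequency")] public string Frequency { get; set; }
            [DataMember(Name = "latitude")]  public double Latitude  { get; set; }
            [DataMember(Name = "longitude")] public double Longitude { get; set; }
            [DataMember(Name = "range")]     public double RangeKm   { get; set; }
        }

        class FeedReader
        {
            static void Main()
            {
                // Placeholder URL; a published Google spreadsheet exposes its own JSON feed address.
                const string feedUrl = "https://example.com/stations.json";

                using (WebClient client = new WebClient())
                using (Stream stream = new MemoryStream(client.DownloadData(feedUrl)))
                {
                    var serializer = new DataContractJsonSerializer(typeof(List<Station>));
                    var stations = (List<Station>)serializer.ReadObject(stream);
                    Console.WriteLine("Loaded {0} stations", stations.Count);
                }
            }
        }
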

    Read the article

  • Creating typed WSDL’s for generic WCF services of the ESB Toolkit

    - by charlie.mott
    source: http://geekswithblogs.net/charliemott
    Question: How do you make it easy for client systems to consume the generic WCF services exposed by the ESB Toolkit using messages that conform to agreed schemas/contracts? Usually the developer of a system consuming a web service adds a service reference using a WSDL. However, the WSDLs for the generic services exposed by the ESB Toolkit do not make it easy to develop clients that conform to agreed schemas/contracts.
    Recommendation: Take a copy of the generic WSDL and modify it to use the proper contracts. This is very easy. It will work with the generic on-ramps so long as the <part> wrapping is removed from the WCF adapter configuration in the BizTalk receive locations. Attempting to create a WSDL where the input and output messages are sent/returned with a <part> wrapper is a nightmare; I have not managed it.
    Consequences: I can only see the following consequences of removing the <part> wrapper: ESB Test Client – I needed to modify the out-of-the-box ESB Test Client source code to make it send non-wrapped messages. Flat-file formatted messages – the endpoint will no longer support flat-file message formats. However, even if you needed to support this integration pattern through WCF, you would most likely want to create a separate receive location anyway, with its own independently configured XML disassembler pipeline component.
    Instructions: These steps show a request-response implementation of this.
    WCF receive locations: In BizTalk, for the WCF receive location for the ESB on-ramp, set the adapter Message settings\bindings to "UseBodyPath": Inbound BizTalk message body = Body; Outbound WCF message body = Body.
    Create a WSDL for each supported integration use-case: Save a copy of the WSDL for the WCF generic receive location above that you intend the client system to use. Give it a name that mirrors the interface agreement (e.g. Esb_SuppliersSearchCommand_wsHttpBinding.wsdl). Add any XSD schema files imported below to this same folder.
    Edit the WSDL to import schemas. For example, this:
    <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports" />
    … would become something like:
    <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports">
        <xsd:import schemaLocation="SupplierSearchCommand_V1.xsd" namespace="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>
        <xsd:import schemaLocation="SuppliersDocument_V1.xsd" namespace="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
        <xsd:import schemaLocation="Types\Supplier_V1.xsd" namespace="http://schemas.acme.co.uk/types/supplier/1.0"/>
        <xsd:import schemaLocation="GovTalk\bs7666-v2-0.xsd" namespace="http://www.govtalk.gov.uk/people/bs7666"/>
        <xsd:import schemaLocation="GovTalk\CommonSimpleTypes-v1-3.xsd" namespace="http://www.govtalk.gov.uk/core"/>
        <xsd:import schemaLocation="GovTalk\AddressTypes-v2-0.xsd" namespace="http://www.govtalk.gov.uk/people/AddressAndPersonalDetails"/>
    </xsd:schema>
    Modify the input and output messages. For example, this:
    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
        <wsdl:part name="part" type="xsd:anyType"/>
    </wsdl:message>
    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
        <wsdl:part name="part" type="xsd:anyType"/>
    </wsdl:message>
    … would become something like:
    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
        <wsdl:part name="part" element="ssc:SupplierSearchEvent" xmlns:ssc="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>
    </wsdl:message>
    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
        <wsdl:part name="part" element="sd:SuppliersDocument" xmlns:sd="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
    </wsdl:message>
    This WSDL can now be added as a service reference in client solutions.
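    If several use-cases need the same treatment, the edit can also be scripted rather than done by hand. Below is a rough sketch using Python's ElementTree; the file names, schema locations and element names are the illustrative values from the example above, not anything shipped with the ESB Toolkit:

```python
# Sketch: apply the two WSDL edits described above programmatically.
# File names, schema locations and element QNames are the post's example
# values and stand in for whatever your interface agreement defines.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
XSD_NS = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("wsdl", WSDL_NS)
ET.register_namespace("xsd", XSD_NS)

tree = ET.parse("Esb_SuppliersSearchCommand_wsHttpBinding.wsdl")
root = tree.getroot()

# 1. Add the message schema import to the (empty) Imports schema element.
imports_schema = root.find(
    f".//{{{XSD_NS}}}schema[@targetNamespace='http://microsoft.practices.esb/Imports']")
if imports_schema is None:
    raise SystemExit("Imports schema element not found - is this the generic WSDL?")
ET.SubElement(imports_schema, f"{{{XSD_NS}}}import", {
    "schemaLocation": "SupplierSearchCommand_V1.xsd",
    "namespace": "http://schemas.acme.co.uk/suppliersearchcommand/1.0",
})

# 2. Replace the anyType part of the input message with the typed element.
part = root.find(
    f".//{{{WSDL_NS}}}message[@name='ProcessRequestResponse_SubmitRequestResponse_InputMessage']"
    f"/{{{WSDL_NS}}}part")
del part.attrib["type"]
part.set("xmlns:ssc", "http://schemas.acme.co.uk/suppliersearchcommand/1.0")
part.set("element", "ssc:SupplierSearchEvent")

tree.write("Esb_SuppliersSearchCommand_typed.wsdl", xml_declaration=True, encoding="utf-8")
```

    The output message part would be retyped the same way; it is worth diffing the generated file against a hand-edited copy once before trusting the script.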

    Read the article

  • Mouse pointer strange problem...

    - by Paska
    Hi all, I have the latest Ubuntu installed (10.10), but after a year of updates, video driver updates, and hundreds of tricks, the mouse pointer is shown as an UGLY square... These are the screenshots: First Second. I have no idea what to do to solve this problem. Does anyone have an idea how to solve it? Edit: this problem has been present since 8.10! Edit 2, video card specifications:
    paska@ubuntu:~$ hwinfo --gfxcard
    35: PCI 100.0: 0300 VGA compatible controller (VGA)
    [Created at pci.318]
    UDI: /org/freedesktop/Hal/devices/pci_1106_3230
    Unique ID: VCu0.QX54AGQKWeE
    Parent ID: vSkL.CP+qXDDqow8
    SysFS ID: /devices/pci0000:00/0000:00:01.0/0000:01:00.0
    SysFS BusID: 0000:01:00.0
    Hardware Class: graphics card
    Model: "VIA K8M890CE/K8N890CE [Chrome 9]"
    Vendor: pci 0x1106 "VIA Technologies, Inc."
    Device: pci 0x3230 "K8M890CE/K8N890CE [Chrome 9]"
    SubVendor: pci 0x1043 "ASUSTeK Computer Inc."
    SubDevice: pci 0x81b5
    Revision: 0x11
    Memory Range: 0xd0000000-0xdfffffff (rw,prefetchable)
    Memory Range: 0xfa000000-0xfaffffff (rw,non-prefetchable)
    Memory Range: 0xfbcf0000-0xfbcfffff (ro,prefetchable,disabled)
    IRQ: 16 (10026 events)
    I/O Ports: 0x3c0-0x3df (rw)
    Module Alias: "pci:v00001106d00003230sv00001043sd000081B5bc03sc00i00"
    Driver Info #0:
      Driver Status: viafb is not active
      Driver Activation Cmd: "modprobe viafb"
    Config Status: cfg=new, avail=yes, need=no, active=unknown
    Attached to: #17 (PCI bridge)
    Primary display adapter: #35
    paska@ubuntu:~$
    thanks, A

    Read the article

  • Cannot get realtek8188ee to work in 14.04

    - by dang42
    I have a Toshiba Satellite C55-A5300 laptop. When I run lspci -nn it shows
    02:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188EE Wireless Network Adapter [10ec:8179] (rev 01)
    It has always had the common problem others have asked about here (and many, many other places on the web) where it would connect, then drop the connection at random intervals. I tried every solution I could find, here and elsewhere, and they always caused errors after running "make" (more details below), but as I could still connect to networks I just dealt with it. I upgraded to 14.04 a few days ago and now it won't connect at all - I need help getting this to work. I originally followed the instructions posted by chili555 found here: Wireless not working on Toshiba Satellite C55-A5281, but I get the following errors when running "make":
    /home/dan/backports-3.11-rc3-1/net/wireless/sysfs.c:151:2: error: unknown field ‘dev_attrs’ specified in initializer
    .dev_attrs = ieee80211_dev_attrs,
    ^
    /home/dan/backports-3.11-rc3-1/net/wireless/sysfs.c:151:2: warning: initialization from incompatible pointer type [enabled by default]
    /home/dan/backports-3.11-rc3-1/net/wireless/sysfs.c:151:2: warning: (near initialization for ‘ieee80211_class.suspend’) [enabled by default]
    make[6]: *** [/home/dan/backports-3.11-rc3-1/net/wireless/sysfs.o] Error 1
    make[5]: *** [/home/dan/backports-3.11-rc3-1/net/wireless] Error 2
    make[4]: *** [module/home/dan/backports-3.11-rc3-1] Error 2
    make[3]: *** [modules] Error 2
    make[2]: *** [modules] Error 2
    make[1]: *** [modules] Error 2
    make: *** [default] Error 2
    I have no clue how to diagnose the problem or how to proceed from here. I also don't know what information one might need from me in order to move forward. I'll be happy to share anything you'd like to know if it results in this thing (finally!) working properly. Thanks in advance for any / all help. ETA: I did see this post - Realtek 8188ee wireless driver SOLVED - and it looks like it is discussing the same problem I'm having, but I cannot for the life of me figure out what "I had to add the testing repository to my /etc/apt/sources.list" means, so I am still stuck.

    Read the article

  • Silverlight Cream for March 06, 2011 -- #1054

    - by Dave Campbell
    In this Back from the Summit Issue, I am overloaded with posts to choose from. Submittals go first, but I'll eventually catch up... hopefully by MIX :) : Ollie Riches(-2-), Colin Eberhardt, John Papa, Jeremy Likness, Martin Krüger, Joost van Schaik, Karl Shifflett, Michael Crump, Georgi Stoyanov, Yochay Kiriaty, Page Brooks, and Deborah Kurata. Above the Fold: Silverlight: "ClassifiedCabinet: A Quick Start" Georgi Stoyanov WP7: "Easy access to WMAppManifest.xml App properties like version and title" Joost van Schaik Multiple: "Flashcards.Show Version 2 for the Desktop, Browser, and Windows Phone" Yochay Kiriaty Shoutouts: Mohamed Mosallem delivered an online session at the Second Riyadh Online Community Summit: Silverlight 4.0 with SharePoint 2010 John-Daniel Trask posted about a release of a new set of tools released for WP7 development... there's a free trial, so definitely worth a look: Mindscape Phone Elements released! From SilverlightCream.com: WP7Contrib: Trickling data to a bound collection Ollie Riches submitted a couple links... first up is this on a way they found to decrease the load on a data template in WP7 to get under the 90 mb limit and then added their solution to the WP7Contrib lib. WP7Contrib: Why we use SilverlightSerializer instead of DataContractSerializer Ollie Riches' next submittal compares the performance of the SilverlightSerializer & DataContractSerializer on the WP7 platform. MVVM Charting – Binding Multiple Series to a Visiblox Chart Colin Eberhardt sent me this post where he describes binding multiple series to a chart with no code-behind... great long multi-phase tutorial all with source. Silverlight TV 64: Dive into 64bit Support, App Model and Security John Papa has Nick Kramer of the Silverlight team up for his latest Silverlight TV episode, discussing some cool new Silverlight stuff: 64-bit support, multiple windows, etc. Building a Windows Phone 7 Application with UltraLight.mvvm Jeremy Likness has a pre-summit tutorial up on his UltraLight.mvvm project, and how he would use it to build a WP7 app... great to meet you, Jeremy! How to: Storyboard only start with the conspicuousness of the application in the browser window Martin Krüger continues his Storyboard startup solutions with this one about what to do if the Silverlight app is small or simply an island on an html page. Easy access to WMAppManifest.xml App properties like version and title Joost van Schaik posted about the WP7 manifest file and how you can get access to that information at runtime... why you ask? How about version number or title? Be sure to read the helpful hints in the last paragraph too! Mole 2010 Released Karl Shifflett, Josh Smith, and others have released the latest version of Mole... well worth the money in my opinion, if only it worked for Silverlight! (not their fault) Changing the Default Windows Phone 7 Deployment Target In Visual Studio 2010 Michael Crump points out an annoyance with the 2011 WP7 tools update... VS2010 defaults to the device rather than the emulator... and he shows us how to get it pointed back to the emulator! ClassifiedCabinet: A Quick Start Georgi Stoyanov posted a QuickStart to a 'ClassifiedCabinet' control posted on CodePlex... check out the demo first, you'll want to read the article after that. He builds a simple project from scratch using the control. 
Flashcards.Show Version 2 for the Desktop, Browser, and Windows Phone Yochay Kiriaty has a post up about FlashCards.Show version 2 that he worked on with Arik Poznanski and has it now running on the desktop, browser, and WP7, plus you get the source... I've been wanting to write just such an app for WP7, so hey... this saves me some time! A Simple Focus Manager for Jounce Applications Page Brooks has a post up about Jeremy Likness' Jounce... how to set focus to a particular control when a view loads. Silverlight Charting: Formatting the Axis Deborah Kurata is continuing her charting series with this one on setting axis font color and putting the text at an angle... really dresses up the chart! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • AJAX Control Toolkit - Incompatibility with HTMLEditor and UpdatePanel

    - by Guilherme Cardoso
    Unfortunately the HTMLEditor component of the AJAX Control Toolkit is not compatible with the UpdatePanel. The problem shows up when we use accents in the Mozilla Firefox browser while the HTMLEditor is inside an UpdatePanel: letters that contain accents come back as an unknown character (and that is what gets stored in the database or returned after a PostBack). This can be tested using Mozilla Firefox on the ASP.NET AJAX Control Toolkit sample site - write a word with accents and click "Submit Content": http://www.asp.net/AJAX/AjaxControlToolkit/Samples/HTMLEditor/HTMLEditor.aspx As an alternative there are multiple rich text editor components, some using jQuery and others not. Queness has provided a list of 10 components that can be viewed here: http://www.queness.com/post/212/10-jquery-and-non-jquery-javascript-rich-text-editors Hopefully in the next release of the AJAX Control Toolkit this inconsistency and others (like the ModalPopup extender issue already referenced on my blog) will be resolved once and for all. Earlier versions did not have these problems, but with the passing of time some parts have come into conflict. If you know of any alternative or want to follow this problem, you can visit the topic I created in the AJAX Control Toolkit section of the ASP.NET forums: http://forums.asp.net/p/1548141/3848763.aspx

    Read the article

  • Hello World Pagelet

    - by astemkov
    Introduction
    The goal of this exercise is to give you a basic feel for how you can use Pagelet Producer to proxy a web page. We will proxy a simple static Hello World web page, cut one section out of that page and present it as a pagelet that you can later insert on your own application page or on your portal page, such as a WebCenter Portal space or a WebCenter Interaction community page.
    Hello World sample app
    This is the static web page we will work with. Let's assume the following: the Hello World web page is running on server http://appserver.company.com:1234/ and the Hello World web page path is http://appserver.company.com:1234/helloworld/
    Initial Pagelet Producer setup
    Let's assume that the Pagelet Producer server is running on http://pageletserver.company.com:8889/pagelets/ First let's check that Pagelet Producer is up and running. In order to do that we just need to access the following URL: http://pageletserver.company.com:8889/pagelets/ And this is what should be returned: Now you can access the Pagelet Producer administration screens using this URL: http://pageletserver.company.com:8889/pagelets/admin This is how the UI looks. Now, if you connect to the internet via a proxy server, you need to configure the proxy in Pagelet Producer settings. In the Navigator pane: Jump To - Settings, click on "Proxy", and enter your proxy server configuration.
    Creating a resource
    The first thing you need to do is create a resource for your web page. This will tell Pagelet Producer that all sub-paths of the web page should be proxied. It also allows you to set up common rules for how your web page should be proxied and will serve as a container for your pagelets. In the Navigator pane: Jump To - Resources. Click on any existing resource (e.g. welcome_resource), click on the "Create selected type" toolbar button at the top of the Navigator pane, then select "Web" in the "Select Producer Type" dialog box and click "OK". Now that the resource is created, click on the "General" sub-item and specify the following values: Name = AppServer, Source URL = http://appserver.company.com:1234/, Destination URL = /appserver/. Click on the "Save" toolbar button at the top of the Navigator pane. After the resource is created our web page becomes accessible at the URL: http://pageletserver.company.com:8889/pagelets/appserver/helloworld/ So in the original web page address, the Source URL is replaced with the Pagelet Producer URL (http://pageletserver.company.com:8889/pagelets) + Destination URL.
    Creating a pagelet
    Now let's create the "Hello World" pagelet. Under the resource node activate the Pagelets subnode and click on the "Create selected type" toolbar button at the top of the Navigator pane. Click on the "General" sub-node of the newly created pagelet and specify the following values: Name = Hello_World; Library = MyLib (Library is used for logical grouping - the portals use the "Library" value to group pagelets in their respective UIs, so when adding pagelets to a WebCenter Portal space you would see the individual pagelets listed under the "Library" name); URL Suffix = helloworld/index.html (this is where the Hello World page HTML is served from). Click on the "Save" toolbar button at the top of the Navigator pane. The Library name can be anything you want; it doesn't have to match the resource name at all. It is used as a logical grouping of pagelets, and you can include pagelets from multiple resources in the same library or create a new library for each pagelet.
    After you save the pagelet you can access it here: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World which is: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/ + [Library] + [Name] Or, to test the injection of a pagelet into an iframe, you can click on the pagelet's "Documentation" sub-node and use the "Access Pagelet using REST" URL. This is what we will see:
    Clipping
    The pagelet that we just created covers the whole web page, but we want just the "Hello World" segment of it, so let's clip it. Under the Hello_World pagelet node activate the Clipper sub-node and click on the "Create selected type" toolbar button at the top of the Navigator pane. Specify a Name for the newly created clipper, for example "c1". Click on the "Content" sub-node of the clipper, then click on the "Launch Clipper" button. A new browser window will open. By moving the mouse pointer over the web page, select the area you want to clip, then click the left mouse button - the browser window will disappear and you will see that the Clipping Path was automatically generated. Now let's save and access the link from the "Documentation" page again. Here's our pagelet nicely clipped and ready to be used on your WebCenter Space.
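    Since the REST URL is just the producer address plus /inject/v2/pagelet/ plus the library and pagelet names, the smoke test from this walkthrough can also be scripted. A minimal sketch, using the hypothetical host and the MyLib/Hello_World names from above:

```python
# Quick smoke test: build the pagelet REST injection URL from the library and
# pagelet names used above and confirm the producer answers. The host and
# names are the example values from this walkthrough, not real endpoints.
import urllib.request

PRODUCER = "http://pageletserver.company.com:8889/pagelets"

def pagelet_url(library: str, name: str) -> str:
    return f"{PRODUCER}/inject/v2/pagelet/{library}/{name}"

def is_reachable(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    url = pagelet_url("MyLib", "Hello_World")
    print(url, "OK" if is_reachable(url) else "unreachable")
```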

    Read the article

  • RAID5 over LVM on Ubuntu Server 12.04.3

    - by April Ethereal
    I'm trying to create a RAID5 software array using LVM on Ubuntu Server 12.04.3 (i686). I use VirtualBox as I'm only learning how LVM works. So I've created 4 virtual SCSI drives and then did the following:
    pvcreate /dev/sd[b-e]
    vgcreate /dev/sd[b-e] raid5_vg
    lvcreate --type raid5 -i 3 -L 1G -n raid_lv raid5_vg
    However, I get an error after the last command:
    WARNING: Unrecognised segment type raid5
    Using default stripesize 64.00 KiB
    Rounding size (256 extents) up to stripe boundary size (258 extents)
    Cannot update volume group raid5_vg with unknown segments in it!
    So it looks like raid5 is not a valid segment type. "lvm segtypes" also doesn't contain a 'raid5' entry:
    root@ubuntu-lvm:~# lvm segtypes
    striped
    zero
    error
    free
    snapshot
    mirror
    So my question is - how could I create a RAID5 logical volume using LVM only? It seems that it is possible; I saw a few references (not for Ubuntu, unfortunately) for RedHat and Gentoo systems. I don't want to use mdadm for now, until I find out that it is mandatory. Some info about my system is below:
    root@ubuntu-lvm:~# uname -a
    Linux ubuntu-lvm 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 15:31:16 UTC 2013 i686 i686 i386 GNU/Linux
    root@ubuntu-lvm:~# dpkg -l | grep lvm
    ii lvm2 2.02.66-4ubuntu7.3 The Linux Logical Volume Manager
    Thanks.
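    A note on the likely cause - this is my assumption, not something verified against 12.04.3 specifically: LVM's raid5 segment type needs both an lvm2 build that knows the raid segment types (the 2.02.66 shown above appears to predate that support) and the dm-raid device-mapper target in the kernel. A small diagnostic sketch to check both on a given box:

```python
# Diagnostic sketch (assumption, not a verified fix for 12.04): check whether
# this box can create an LVM raid5 LV by looking for the "raid5" segment type
# in `lvm segtypes` and for the dm-raid kernel module.
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True)

def main():
    segtypes = run(["lvm", "segtypes"]).stdout.split()
    print("raid5 segment type known to lvm2:", "raid5" in segtypes)

    dm_raid = run(["modinfo", "dm-raid"])
    print("dm-raid kernel module available:", dm_raid.returncode == 0)

    if "raid5" not in segtypes:
        print("-> this lvm2 build has no LVM RAID support; a newer lvm2 (or mdadm underneath) would be needed")

if __name__ == "__main__":
    main()
```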

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
    Recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). Getting IP address and no issues (ping/surf) but Wireshark unable to detect eth0 interface. I see references in forums to blacklist tulip but looks like I am running dmfe. Not sure if the blacklist is required and where to go from here. Maybe Driver update? Got a little lost looking in that area. Some h/w details below (IP/MAC/HOSTNAME removed) Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux network-admin (HOSTS TAB) does not list eth0, only loopback and bunch of IPv6 interfaces ifconfig eth0 Link encap:Ethernet HWaddr xxxxxxxx inet addr:192.168.x.xx Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: xxxxxxxxxxx 64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:36662 errors:0 dropped:1 overruns:0 frame:0 TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:42115779 (42.1 MB) TX bytes:3056435 (3.0 MB) Interrupt:18 Base address:0xe800 lspci 02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31) Subsystem: Device 4554:434e Flags: bus master, medium devsel, latency 64, IRQ 18 I/O ports at e800 [size=256] Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256] Expansion ROM at fe200000 [disabled] [size=256K] Capabilities: [50] Power Management version 2 Kernel driver in use: dmfe Kernel modules: dmfe hwinfo --netcard 20: PCI 209.0: 0200 Ethernet controller [Created at pci.318] Unique ID: rBUF.0NgK5ZS9c0D Parent ID: 6NW+.siohrLUzzI4 SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0 SysFS BusID: 0000:02:09.0 Hardware Class: network Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet" Vendor: pci 0x1282 "Davicom Semiconductor, Inc." Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet" SubVendor: pci 0x4554 SubDevice: pci 0x434e Revision: 0x31 Driver: "dmfe" Driver Modules: "dmfe" Device File: eth0 I/O Ports: 0xe800-0xe8ff (rw) Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable) Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled) IRQ: 18 (61379 events) HW Address: 00:08:a1:01:35:70 Link detected: yes Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00" Driver Info #0: Driver Status: dmfe is active Driver Activation Cmd: "modprobe dmfe" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #11 (PCI bridge)

    Read the article

  • How to fix Software Center crashes?

    - by shandna towns
    E: Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list E: The list of sources could not be read. shandanatowns@ubuntu:~$ software-center 2012-09-25 12:23:35,115 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2012-09-25 12:23:35,123 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:22: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:22: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:22: Not using units is deprecated. Assuming 'px'. 2012-09-25 12:23:35,472 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file 2012-09-25 12:23:35,477 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')' 2012-09-25 12:23:35,477 - root - ERROR - Could not find any typelib for LaunchpadIntegration 2012-09-25 12:23:35,605 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None. 2012-09-25 12:23:35,987 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 257, in open self._cache = apt.Cache(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in __init__ self.open(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 149, in open self._list.read_main_list() SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list 2012-09-25 12:23:37,000 - softwarecenter.db.enquire - ERROR - _get_estimate_nr_apps_and_nr_pkgs failed Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/db/enquire.py", line 115, in _get_estimate_nr_apps_and_nr_pkgs tmp_matches = enquire.get_mset(0, len(self.db), None, xfilter) File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__ if (not pkgname in self.cache and File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__ return self._cache.__contains__(k) AttributeError: 'NoneType' object has no attribute '__contains__' Traceback (most recent call last): File "/usr/bin/software-center", line 182, in <module> app.run(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1385, in run self.show_available_packages(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1323, in show_available_packages self.view_manager.set_active_view(ViewPages.AVAILABLE) File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 151, in set_active_view view_widget.init_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 173, in init_view self.cache, self.db, self.icons, self.apps_filter) 
File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 81, in __init__ self.build() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 322, in build self._build_homepage_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 120, in _build_homepage_view self._append_whats_new() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 251, in _append_whats_new whats_new_cat = self._update_whats_new_content() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 236, in _update_whats_new_content docs = whats_new_cat.get_documents(self.db) File "/usr/share/software-center/softwarecenter/db/categories.py", line 132, in get_documents nonblocking_load=False) File "/usr/share/software-center/softwarecenter/db/enquire.py", line 317, in set_query self._blocking_perform_search() File "/usr/share/software-center/softwarecenter/db/enquire.py", line 212, in _blocking_perform_search matches = enquire.get_mset(0, self.limit, None, xfilter) File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__ if (not pkgname in self.cache and File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__ return self._cache.__contains__(k) AttributeError: 'NoneType' object has no attribute '__contains__'

    Read the article

  • Scid Chess Program Compiling issue

    - by lbochitt
    First of all I would like to point that I'm new to Ubuntu so sorry if what I am asking is ridiculous. I have downloaded Scid 4.4 chess program and I have tried to compile it as it was explained on its website: 1) Initialize git. 2) Create a folder where you want to download and (?) compile the source, then cast: git init on your command line. 3) You're now ready to download the sources recall Fulvio's spell: git clone git://scid.git.sourceforge.net/gitroot/scid/scid This should get you the latest Scid source. 4) You're now ready to compile Scid. In principle, all you need to do is: ./configure and then make 5) If you get stuck, you probably need to get developer versions of tcl/tk. This translates into issuing these three commands: sudo apt-get install tcl8.5-dev sudo apt-get install tk8.5-dev sudo apt-get install zlib1g -dev 6) You should now be ready to compile The problem is that when I run ./configure to start compiling the following message appears on Terminal: configure: Makefile configuration program for Scid Tcl/Tk version: 8.5 Your operating system is: Linux 3.8.0-19-generic Location of "tcl.h": /usr/include/tcl8.5 Location of "tk.h": /usr/include/tcl8.5 Location of Tcl 8.5 library: not found Location of Tk 8.5 library: not found Checking if your system already has zlib installed: yes. Using Makefile.conf. Not all settings could be determined! The default Makefile was written. You will need to edit it before you can compile Scid. What should I do? Has anybody faced this problem before? Thanks in advance I have run ls -l /usr/include/tcl8.5/tcl.h here's the result: -rw-r--r-- 1 root root 87291 abr 22 10:45 /usr/include/tcl8.5/tcl.h I have also tried what you suggested Could you run git reset --hard HEAD and git clean -d -f to clean up everything using Git? Then run ./configure again. Just a shot in the dark - I've seen some GNU automake stuff still listening to its "cached" version of the results or something. Still no solution. I don't know why it can't recognize the library though it is installed I opened configure to see where the program looked for the library. This is the code: # libraryPath: List of possible locations of Tcl/Tk library. set libraryPath { /usr/lib /usr/lib64 /usr/local/tcl/lib /usr/local/lib /usr/X11R6/lib /opt/tcltk/lib /usr/lib/x86_64-linux-gnu } lappend libraryPath "/usr/lib/tcl${tclv}" lappend libraryPath "/usr/lib/tk${tclv}" lappend libraryPath "/usr/lib/tcl${tclv_nodot}" lappend libraryPath "/usr/lib/tk${tclv_nodot}" # Try to add tcl_library and auto_path values to libraryPath, # in case the user has a non-standard Tcl/Tk library location: if {[info exists ::tcl_library]} { lappend headerPath \ [file join [file dirname [file dirname $::tcl_library]] include] lappend libraryPath [file dirname $::tcl_library] lappend libraryPath $::tcl_library } if {[info exists ::auto_path]} { foreach name $::auto_path { lappend libraryPath $name } } if {! [info exists var(TCL_INCLUDE)]} { puts -nonewline { Location of "tcl.h": } set opt(tcl_h) [findDir "tcl.h" $headerPath "TCL_VERSION.*$tclv"] if {$opt(tcl_h) == ""} { puts "not found" set success 0 set opt(tcl_h) "$::defaultVar(TCL_INCLUDE)" } else { puts $opt(tcl_h) } } set opt(tcl_lib) "" if {! 
[info exists var(TCL_LIBRARY)]} { puts -nonewline " Location of Tcl $tclv library: " set opt(tcl_lib) [findDir "libtcl${tclv}.*" $libraryPath] if {$opt(tcl_lib) == ""} { set opt(tcl_lib) [findDir "libtcl${tclv_nodot}.*" $libraryPath] if {$opt(tcl_lib) == ""} { puts "not found" set success 0 set opt(tcl_lib) "$opt(tcl_h)" set opt(tcl_lib_file) "tcl\$(TCL_VERSION)" } else { set opt(tcl_lib_file) "tcl${tclv_nodot}" puts $opt(tcl_lib) } } else { set opt(tcl_lib_file) "tcl\$(TCL_VERSION)" puts $opt(tcl_lib) } } if {! [info exists var(TCL_INCLUDE)]} { set var(TCL_INCLUDE) "-I$opt(tcl_h)" } if {! [info exists var(TCL_LIBRARY)]} { set var(TCL_LIBRARY) "-L$opt(tcl_lib) -l$opt(tcl_lib_file)" } return $success So I guess (And by guess I mean i have no idea how to code) I should write somewhere in here usr/lib/tcl8.5 and usr/lib/tk8.5, am I right?
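    Before editing the script, it may be easier to first check whether libtcl8.5/libtk8.5 exist anywhere on the system at all. This throwaway sketch simply replays the same directory search; the paths are copied from the configure excerpt above, plus /usr/lib/i386-linux-gnu, which is my addition for multiarch 32-bit Ubuntu and not in configure's own list:

```python
# Replay the library search Scid's configure script performs, so you can see
# whether libtcl8.5 / libtk8.5 exist and, if so, where. Paths are taken from
# the configure excerpt above plus the i386 multiarch directory (an addition).
import glob
import os

LIBRARY_PATHS = [
    "/usr/lib", "/usr/lib64", "/usr/local/tcl/lib", "/usr/local/lib",
    "/usr/X11R6/lib", "/opt/tcltk/lib", "/usr/lib/x86_64-linux-gnu",
    "/usr/lib/tcl8.5", "/usr/lib/tk8.5",
    "/usr/lib/i386-linux-gnu",  # hypothetical extra path, not searched by configure
]

def find(pattern):
    hits = []
    for directory in LIBRARY_PATHS:
        hits.extend(glob.glob(os.path.join(directory, pattern)))
    return hits

if __name__ == "__main__":
    for pattern in ("libtcl8.5*", "libtk8.5*"):
        matches = find(pattern)
        print(pattern, "->", matches if matches else "not found in configure's search path")
```

    If the libraries only turn up in a directory configure does not look at (on multiarch Ubuntu that is often /usr/lib/i386-linux-gnu or /usr/lib/x86_64-linux-gnu), pointing the generated Makefile's Tcl/Tk library settings at that directory is usually all that is needed.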

    Read the article

  • Whats consuming HDD Space

    - by Umair Mustafa
    I have a single partition of 92GB in which I installed Ubuntu 12.04, and for some unknown reason a message pops up saying that I only have 1GB of HDD space left. I ran the command sudo du -hscx * on / and /home.
    /home gave me this result:
    4.0K C:\nppdf32Log\debuglog.txt
    0 convertedvideo.avi
    176M Desktop
    16K Documents
    169M Downloads
    4.0K examples.desktop
    17M file.txt
    4.0K Music
    984K Pictures
    4.0K Public
    320K Red Hat 6.iso
    2.5M syslog-ng_3.3.6.tar.gz
    4.0K Templates
    8.0K terminal.png
    1.2M Thunderbird Attachments
    698M ubuntu10.04LTS.iso
    16K Ubuntu One
    4.0K Untitled Folder
    4.0K Videos
    21G VirtualBox VMs
    22G total
    And / gave me this result:
    81G home
    0 initrd.img
    0 initrd.img.old
    833M lib
    16K lost+found
    68K media
    4.0K mnt
    260M opt
    du: cannot access `proc/8339/task/8339/fd/4': No such file or directory
    du: cannot access `proc/8339/task/8339/fdinfo/4': No such file or directory
    du: cannot access `proc/8339/fd/4': No such file or directory
    du: cannot access `proc/8339/fdinfo/4': No such file or directory
    0 proc
    640K root
    908K run
    8.6M sbin
    4.0K selinux
    4.0K srv
    0 sys
    148K tmp
    3.3G usr
    436M var
    0 vmlinuz
    0 vmlinuz.old
    86G total
    If you look at the result returned for /, it shows that /home is consuming 81GB, but on the other hand /home itself reports only 22GB. I can't figure out what's consuming the HDD. I have not installed anything except virtual machines. Perpetrator found using Disk Usage Analyzer.
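    When du output like the above gets unwieldy, the same first-level totals can be produced with a few lines of script. This is only a rough stand-in for du or Disk Usage Analyzer (apparent sizes, no exclusions, unreadable files skipped):

```python
# Rough stand-in for `du -hs */`: report the apparent size of each
# first-level directory under a starting point, largest first.
import os
import sys

def tree_size(path: str) -> int:
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda err: None):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass  # unreadable or vanished file (like the /proc entries du complained about)
    return total

if __name__ == "__main__":
    start = sys.argv[1] if len(sys.argv) > 1 else os.path.expanduser("~")
    entries = [e for e in os.scandir(start) if e.is_dir(follow_symlinks=False)]
    sizes = sorted(((tree_size(e.path), e.name) for e in entries), reverse=True)
    for size, name in sizes:
        print(f"{size / 2**30:8.2f} GiB  {name}")
```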

    Read the article

  • “psmouse.ko' not found” installing ALPS touchpad in a Lenovo Ideapad Flex 14

    - by user279806
    I was following these instructions on Ubuntu 14.04: https://github.com/he1per/psmouse-dkms-alpsv7 and "psmouse serio1: alps: Unknown ALPS touchpad" in a Lenovo Ideapad Flex 15. After many tries I received the following error message:
    root@alisson-Lenovo-Ideapad-Flex14:/tmp/psmouse-dkms-alpsv7# ./install.sh
    ------ Building with dkms -------
    Error! DKMS tree already contains: psmouse-dkms-alpsv7-1.0
    You cannot add the same module/version combo more than once.
    Module psmouse-dkms-alpsv7/1.0 already built for kernel 3.13.0-24-generic/4
    ------ Installing with dkms -------
    Module psmouse-dkms-alpsv7/1.0 already installed on kernel 3.13.0-24-generic/x86_64
    ???? Error: dkms install failed:\n '/usr/lib/modules/3.13.0-24-generic//updates/psmouse.ko' not found.
    root@alisson-Lenovo-Ideapad-Flex14:/tmp/psmouse-dkms-alpsv7#
    Then, searching for "psmouse.ko":
    alisson@alisson-Lenovo-Ideapad-Flex14:~$ locate psmouse.ko
    /lib/modules/3.13.0-24-generic/kernel/drivers/input/mouse/psmouse.ko
    /lib/modules/3.13.0-24-generic/updates/dkms/psmouse.ko
    /var/lib/dkms/psmouse-dkms-alpsv7/1.0/3.13.0-24-generic/x86_64/module/psmouse.ko
    /var/lib/dkms/psmouse-dkms-alpsv7/1.0/build/src/.psmouse.ko.cmd
    /var/lib/dkms/psmouse-dkms-alpsv7/1.0/build/src/psmouse.ko
    alisson@alisson-Lenovo-Ideapad-Flex14:~$
    What should I do?

    Read the article
