Search Results

Search found 26810 results on 1073 pages for 'fixed point'.


  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are in use or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:

      SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs – or, more accurately, in order to choose when to have the NULLs – you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

      DECLARE @Start DATETIME
      DECLARE @First DATETIME
      DECLARE @Second DATETIME
      DECLARE @Third DATETIME
      DECLARE @Finish DATETIME

      SET @Start = GETDATE()
      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
      SET @First = GETDATE()
      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
      SET @Second = GETDATE()
      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
      SET @Third = GETDATE()
      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
      SET @Finish = GETDATE()

      SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
           , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
           , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
           , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set reveals some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter, and the duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

    Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. These are pretty self-explanatory and we can wrap them in some code to make things a little easier to read:

      SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
           , OBJECT_NAME([ddips].[object_id]) AS [TableName]
           ...
      FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    gives us

      SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
           , OBJECT_NAME([ddips].[object_id]) AS [TableName]
           , [i].[name] AS [IndexName]
           , ...
      FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
      INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                     AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

      SELECT *
      FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

      DECLARE @db_id SMALLINT;
      DECLARE @object_id INT;
      SET @db_id = DB_ID(N'AdventureWorks_2008');
      SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
      IF @db_id IS NULL
      BEGIN
          PRINT N'Invalid database';
      END
      ELSE IF @object_id IS NULL
      BEGIN
          PRINT N'Invalid object';
      END
      ELSE
      BEGIN
          SELECT *
          FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
      END;
      GO

    Where the results of querying this dmv have no effect on other processes (i.e. you are simply viewing the results in the SSMS results area), an invalid value will simply be noticed when the results are not consistent with what you expected; that is the method I have used for this blog. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

    avg_fragmentation_in_percent – the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    fragment_count – the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    page_count – the total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks must do to retrieve the data to satisfy the query increases, and performance starts to suffer. This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA who is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business, and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8)*1024*(Col2/100))/Col3 = Col4*.

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk space given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: details of the smallest and largest records in the index, purely offered as a guide to the DBA to better understand the storage practices taking place.
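    Pulling these columns together, the 'intelligent' defragmentation process mentioned earlier can be sketched as below. This is a minimal illustration in the spirit of Books Online example D, not code from this article; the 5%/30% thresholds and the 8-page floor are commonly used starting points, not values the author gives, so treat them as assumptions to tune for your own system:

      -- Generate (but do not execute) a defragmentation command for each
      -- fragmented index in the current database.
      SELECT OBJECT_SCHEMA_NAME(ddips.[object_id]) AS [SchemaName]
           , OBJECT_NAME(ddips.[object_id]) AS [TableName]
           , i.[name] AS [IndexName]
           , ddips.[avg_fragmentation_in_percent]
           , 'ALTER INDEX ' + QUOTENAME(i.[name])
             + ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ddips.[object_id]))
             + '.' + QUOTENAME(OBJECT_NAME(ddips.[object_id]))
             + CASE WHEN ddips.[avg_fragmentation_in_percent] > 30.0
                    THEN ' REBUILD;'    -- heavy fragmentation: rebuild the index
                    ELSE ' REORGANIZE;' -- light fragmentation: cheaper online reorganise
               END AS [DefragCommand]
      FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
      INNER JOIN [sys].[indexes] AS i
              ON ddips.[object_id] = i.[object_id]
             AND ddips.[index_id] = i.[index_id]
      WHERE i.[name] IS NOT NULL                        -- skip heaps
        AND ddips.[avg_fragmentation_in_percent] > 5.0  -- below this, leave the index alone
        AND ddips.[page_count] > 8                      -- tiny indexes rarely benefit

    Reviewing the generated commands before running them keeps an automated process from blindly rebuilding large indexes during business hours.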
    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the FILL_FACTOR of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed so that new records do not need to be inserted into the middle of the index but are instead always added to the end.

    * It's approximate because there are many factors, associated with things like the type of data and other database settings, that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk.

    Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Welcome to the Oracle Retail International Blog

    - by sarah.taylor(at)oracle.com
    Welcome to the first post of the new Oracle Retail International Blog. Retail is an international business and today's successful retailers view themselves in the context of a global market. A niche fashion business in Tokyo will learn marketing strategies from the luxury brands of Milan, an independent grocer in Oslo will source the same global brands as a supermarket in Oklahoma, and every retailer in the world will measure their multi-channel operation against the international e-commerce giant Amazon. Why? Because today's customer is a global customer with unparalleled expectations of choice, price and service. Today's consumers have access to more information on retail than ever before. Technology allows people to shop from their home, their office or from the phone in their pocket, wherever they are and at whatever time suits them. Customers are using the web to search for products and promotions. They are also using the web to develop their voice in commenting on products and services that have delighted or disappointed. In an information-rich industry, this customer element creates a new world of data. The best retailers are developing eagle eyes for reading customer activity and turning it into profitable decisions. Ultimately, whether you choose to compete or shop on price, service, product innovation, excellent operations or all of the above - the international world of retail has become an inspiration for all - retailer and consumer alike. Retail as an industry is growing and diversifying at a faster rate than ever before. Yet it is still the customer who picks the winners and the losers on the retail field. Economic circumstances transform the rules, but it is still the customer who dictates the game, the pace, the price, and the perception of the brand. Wise retailers never rest on their laurels. They are always shopping for ideas on how to improve and differentiate the offer at every touch point to meet the customer's needs better than anyone else and to gain each customer's loyalty at a time when loyalty can be cheap. With this blog, I hope that we might provide a hub for discussion around what unifies retail and how technology supports both the retailer and customer experience. Despite the competitive nature of this market, we hope that this will provide an opportunity to share experiences and lessons learnt, with a view that knowledge can only help this industry to grow and develop. At Oracle we've been supporting retailers for many years. Many of us have worked within retail organisations all over the world, myself included. With this in mind, I don't feel it is too bold a statement to say that Oracle understands retail. We wouldn't be so heavily integrated in some of the biggest and most well-known names in retail if we didn't. With this blog, we intend to create a community of international retailers that can exchange ideas and experiences, debate collective challenges and drive a better understanding of this continually evolving industry. Events such as the World Retail Congress and NRF's Big Show bring enormous value to the retail industry, providing platforms for discussion and learning, but they happen once a year. We wanted to create a platform for discussion on a different level – one that, like retail, is always on.
    We hope not only to be the infrastructure that brings all of these systems together within a retail business, but also an infrastructure that supports the industry internationally, helping it grow and flourish by creating a platform for networking, discussion, creativity, vision and strategy. Please feel free to ask questions or comment using the comments functionality. You might also want to visit our other Oracle Retail social media sites:
    Facebook - http://www.facebook.com/oracleretail
    YouTube - http://www.youtube.com/user/oracleretail
    Twitter - http://twitter.com/#!/oracleretail
    Insight-Driven Retailing Blog - http://blogs.oracle.com/retail/

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story?  Hard disk goes missing, USB thumb drive goes missing, laptop goes missing...  Not a week goes by that we don't hear about our data going missing...  Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property...  When I have spoken at Security and IT conferences, part of my message is "Why do you give your users data to lose in the first place?"  I don't suggest they can't have access to it... in fact I work for the company that provides the premier data security and desktop solutions that DO provide access.  Access isn't the issue.  'Keeping the data' is the issue. We are all human - we all make mistakes... I fault no one for having their car stolen or for dropping a USB thumb drive. (Well, except the thieves - I can certainly find some fault there.)  Where I find fault is in policy (or lack thereof, sometimes) that allows users to carry around private, and important, data with them.  Mr. Director of IT - it is your fault, not theirs.  Ms. CSO - look in the mirror. It isn't like one can't find a network to access the data from.  You are on a network right now.  How many wireless ones (wifi, mifi, cellular...) are there around you, right now?  Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required.  The argument that the laptop had a password and the hard disk was encrypted is ridiculous.  An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'. What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota.  The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and... well... no one knows where the data is, and yes - they were loaded with patient info.  What were you thinking? Obviously you can't steal data from a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Macs, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy.  Since you can get at the data securely from any network, why would you want to take personal responsibility for it?  Both Sun Rays and Secure Global Desktop are widely used in Healthcare... but clearly not widely enough. We need to do a better job of getting the message out - Healthcare (or insert your business type here) and distributed data don't mix.  Then add Hot Desking and 'follow me printing' and you have something that Clinicians (and CSOs) love. Thanks for putting up my blood pressure, Denny.

    Read the article

  • Finding the maximum value/date across columns

    - by AtulThakor
    While working on some code recently I discovered a neat little trick to find the maximum value across several columns. The starting point was finding the maximum date across several related tables and storing the maximum value against an aggregated record. Here's the sample setup code:

      USE TEMPDB

      IF OBJECT_ID('CUSTOMER') IS NOT NULL
      BEGIN
          DROP TABLE CUSTOMER
      END

      IF OBJECT_ID('ADDRESS') IS NOT NULL
      BEGIN
          DROP TABLE ADDRESS
      END

      IF OBJECT_ID('ORDERS') IS NOT NULL
      BEGIN
          DROP TABLE ORDERS
      END

      SELECT 1 AS CUSTOMERID, 'FREDDY KRUEGER' AS NAME, GETDATE() - 10 AS DATEUPDATED
      INTO CUSTOMER

      SELECT 100000 AS ADDRESSID, 1 AS CUSTOMERID, '1428 ELM STREET' AS ADDRESS, GETDATE() - 5 AS DATEUPDATED
      INTO ADDRESS

      SELECT 123456 AS ORDERID, 1 AS CUSTOMERID, GETDATE() + 1 AS DATEUPDATED
      INTO ORDERS

    Now, the code originally used a function to determine the maximum date, and this performed poorly. After considering pivoting the data I opted for a CASE statement; this seemed reasonable until I discovered other areas which needed to determine the maximum date between 5 or more tables, which didn't scale well. The final solution involved using the VALUES clause within a subquery, as follows:

      SELECT C.CUSTOMERID
           , A.ADDRESSID
           , (SELECT MAX(DT)
              FROM (VALUES (C.DATEUPDATED), (A.DATEUPDATED), (O.DATEUPDATED)) AS VALUE(DT))
      FROM CUSTOMER C
      INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID
      INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID

    As you can see, the solution scales well and can take advantage of many of the aggregate functions!
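    For contrast, the CASE-based approach mentioned above (and then discarded) might have looked something like the following sketch. This is my reconstruction against the same sample tables, not the author's original code; with three tables it is tolerable, but every additional DATEUPDATED column adds another WHEN arm and one more comparison per arm, which is why it scales poorly:

      SELECT C.CUSTOMERID
           , A.ADDRESSID
           -- Three-way maximum via CASE: each arm must compare against every other column
           , CASE
                 WHEN C.DATEUPDATED >= A.DATEUPDATED AND C.DATEUPDATED >= O.DATEUPDATED THEN C.DATEUPDATED
                 WHEN A.DATEUPDATED >= C.DATEUPDATED AND A.DATEUPDATED >= O.DATEUPDATED THEN A.DATEUPDATED
                 ELSE O.DATEUPDATED
             END AS MAXDATEUPDATED
      FROM CUSTOMER C
      INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID
      INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID

    The VALUES-based subquery avoids this combinatorial growth: adding a fourth table only means adding one more row to the VALUES list.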

    Read the article

  • JD Edwards Apps in a Box - Update

    - by Hartmut Wiese
    Summary and clarification

    JD Edwards Apps in a Box is a Partner offering to the customer. We at Oracle have a huge interest in getting a successful offering to the market, and we help the Partner build their offering. We provide components like JD Edwards EnterpriseOne and the hardware. The Business Partner adds the installation services and positions this as a solution to the market for a single price. As you know, JD Edwards EnterpriseOne can run on multiple hardware platforms.

    Linux/X-86 version

    As you all know, we have JD Edwards VM Templates available from Oracle for the X-86 architecture. Each Partner should be, or already is, able to install JD Edwards EnterpriseOne using these images from our software delivery cloud. We have now built a master bill of material for an X3-2 hardware configuration, and it has been uploaded to the Community Workspace. This is a SUGGESTION and limited to 50 users MAX. However, I strongly recommend that you do a sizing as usual and verify the configuration for each opportunity individually.

    T4-1/X3-2 version

    Oracle is not providing similar images for the T4-1 SPARC/Solaris architecture. There is an Optimized Solution Team inside Oracle who created an Optimized Solution for JD Edwards some time ago. They created a whitepaper which is still available to download. This whitepaper was used as a starting point; however, we decided to build a new version of it using the latest software and hardware available. This has now been finalized and we are happy to provide it to our partners. This image is more a service we provide for each partner, which they can reuse and extend based on their individual offerings. It is not an officially supported Oracle Product and cannot be used to deploy to customers immediately. You cannot resell "JDE in a box". You can use these images to save time while building your own go-to-market offering. You might want to add functionality like Mobility. It is also not complete, as the Deployment Server needs to be configured individually at the customer site. We will create some documentation about what this image contains (and what it does not), and what final installation activities need to be provided by each VAD/Partner in this process. I will send an email to the community once we are ready to share it. You will then find these assets in the Community Workspace.

    The Business Model with Oracle Hardware

    For those who have not done any hardware business with Oracle yet: usually a HW reseller orders the hardware through a Value Add Distributor (VAD) and not from Oracle directly. Each Partner needs to have hardware resell rights to do so. The VAD assembles the boxes according to the needs of each customer. It is easily possible for them to prepare the boxes with the images we/you provide. However, the final configuration is something a reseller/implementer needs to do at the customer site. This process is not the same across the EMEA region. Sometimes a VAD takes the order but does not see the hardware at all. In those cases the VAD cannot provide any help with the pre-loading of any images, and the reseller/implementer needs to do that. In some countries we do not have VADs at all.

    Read the article

  • MySql Connector/NET 6.7.4 GA has been released

    - by fernando
    MySQL Connector/Net 6.7.4, a new version of the all-managed .NET driver for MySQL, has been released. This is the GA release and it is feature complete. It is recommended for production environments. It is appropriate for use with MySQL server versions 5.0-5.7. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point – if you can't find this version on some mirror, please try again later or choose another download site).

    The 6.7 version of MySQL Connector/Net brings the following new features:
    - WinRT Connector.
    - Load Balancing support.
    - Entity Framework 5.0 support.
    - Memcached support for the InnoDB Memcached plugin.
    - This version also splits the product in two: from now on, starting with version 6.7, Connector/NET will include only the former Connector/NET ADO.NET driver, Entity Framework and ASP.NET providers (the core libraries MySql.Data, MySql.Data.Entity & MySql.Web), while all of the former product's Visual Studio integration (design support, IntelliSense, debugger) is available as part of the MySql Windows Installer under the name "MySql for Visual Studio".

    WinRT Connector
    -------------------------------------------
    Now you can write MySql data access apps in Windows Runtime (aka Store Apps) using the familiar API of Connector/NET for .NET.

    Load Balancing Support
    -------------------------------------------
    Now you can set up a Replication or Cluster configuration in the backend, and Connector/NET will balance the load of queries among all servers making up the backend topology.

    Entity Framework 5.0
    -------------------------------------------
    Connector/NET is now compatible with EF 5, including special features of EF 5 like spatial types.

    Memcached
    -------------------------------------------
    Just set up the InnoDB memcached plugin and use Connector/NET's new APIs to establish a client to a MySql 5.6 server's memcached daemon.

    Bug fixes included in this release:
    - Fix for Entity Framework when inserting data having Identity columns (Oracle bug #16494585).
    - Fix for Connector/NET being unable to read data from a MySql table using UTF-16/UTF-32 (MySql bug #69169, Oracle bug #16776818).
    - Fix for malformed query in Entity Framework when eager loading due to multiple projections (MySql bug #67183, Oracle bug #16872852).
    - Fix for database objects with 'dbo' prefix when using automatic migrations in Entity Framework 5.0 (Oracle bug #16909439).
    - Fix for bug where an IIS application pool reset of the worker process causes the website to crash (Oracle bug #16909237, MySql bug #67665).
    - Fix for error in LINQ to Entities query when using Distinct().Count() (MySql bug #68513, Oracle bug #16950146).
    - Fix for occasionally returning no data when the socket connection is slow, interrupted or delayed (MySql bug #69039, Oracle bug #16950212).
    - Fix for ConstraintException when filling a datatable (MySql bug #65065, Oracle bug #16952323).
    - Fix for Data Provider not found after uninstalling MySql for Visual Studio (Oracle bug #16973456).
    - Fix for nested sql generated for LINQ to Entities query with Take and Order by (MySql bug #65723, Oracle bug #16973939).

    The documentation is available at http://dev.mysql.com/doc/refman/5.7/en/connector-net.html

    Enjoy and thanks for the support!

    -- Fernando Gonzalez Sanchez | Software Engineer | Oracle MySQL Windows Experience Team, Connector/NET | Guadalajara | Jalisco | Mexico

    Read the article

  • Exposed: Fake Social Marketing

    - by Mike Stiles
    Brands and marketers who want to build their social popularity on a foundation of lies are starting to face more of an uphill climb. Fake social is starting to get exposed, and there are a lot of emperors getting caught without any clothes. Facebook is getting ready to do a purge of "Likes" on Pages that were a result of bots, fake accounts, and even real users who were duped or accidentally Liked a Page. Most of those accidental Likes occur on mobile, where it's easy for large fingers to hit the wrong space. Depending on the degree to which your Page has been the subject of such activity, you may see your number of Likes go down. But don't sweat it, that's a good thing. The social world has turned the corner and assessed the value of a Like. And the verdict is that a Like is valuable as an opportunity to build a real relationship with a real customer. Its value pales immensely compared to a user who's actually engaged with the brand. Those fake Likes aren't doing you any good. Huge numbers may once have impressed, but they're not fooling anybody anymore. Facebook's selling point to marketers is the ability to use a brand's fans to reach friends of those fans. Consequently, there has to be validity and legitimacy to a fan count. Speaking of mobile, Trademob recently reported 40% of clicks are essentially worthless, because 22% of them are accidental (again with the fat fingers), while 18% are trickery. Publishers will put huge banner ads next to tiny app buttons to increase the odds of an accident. Others even hide a banner behind another to score 2 clicks instead of 1. Pontiflex and Harris Interactive last year found 47% of users were more likely to click a mobile ad accidentally than deliberately. Beyond that, hijacked devices are out there manipulating click data. But to what end for a marketer? What's the value of a click on something a user never even saw? What's the value of a seen but accidentally clicked ad if there's no resulting transaction? Back to fake Likes, followers and views; they're definitely for sale on numerous sites, none of which I'll promote. $5 can get you 1,000 Twitter followers. You can even get followers targeted by interests. One site was set up by an unemployed accountant out of his house in England. He gets them from a wholesaler in Brooklyn, who gets them from a 19-year-old supplier in India. The unemployed accountant is making $10,000 a day. That means a lot of brands, celebrities and organizations are playing the fake social game, apparently not coming to grips with the slim value of the numbers they're buying. But now, in addition to having paid good money for non-ROI numbers, there's the embarrassment factor. At least a couple of sites have popped up allowing anyone to see just how many fake and inactive followers you have. Britain's Fake Follower Check and StatusPeople are the two getting the most attention. Enter any Twitter handle and the results are there for all to see. Fake isn't good, period. "Inactive" could be real followers, but if they're real, they're just watching, not engaging. If someone runs a check on your Twitter handle and turns up fake followers, does that mean you're suspect or have purchased followers? No. Anyone can follow anyone, so most accounts will have some fakes. Even account results like Barack Obama's (70% fake according to StatusPeople) and Lady Gaga's (71% fake) don't mean these people knew about all those fakes or initiated them.
Regardless, brands should realize they’re now being watched, and users are judging the legitimacy of their social channels. Use one of any number of tools available to assess and clean out fake Likes and followers so that your numbers are as genuine as possible. And obviously, skip the “buying popularity” route of social marketing strategy. It doesn’t work and it gets you busted…a losing combination.

    Read the article

  • Visual Studio Little Wonders: Quick Launch / Quick Access

    - by James Michael Hare
    Once again, in this series of posts I look at features of Visual Studio that may seem trivial, but can help improve your efficiency as a developer. The index of all my past little wonders posts can be found here. Well, my friends, this post will be a bit short because I’m in the middle of a bit of a move at the moment.  But, that said, I didn’t want to let the blog go completely silent this week, so I decided to add another Little Wonder to the list for the Visual Studio IDE. How often have you wanted to change an option or execute a command in Visual Studio, but can’t remember where the darn thing is in the menu, settings, etc.?  If so, Quick Launch in VS2012 (or Quick Access in VS2010 with the Productivity Power Tools extension) is just for you! Quick Launch / Quick Access – find a command or option quickly For those of you using Visual Studio 2012, Quick Launch is built right into the IDE at the top of the title bar, near the minimize, maximize, and close buttons: But do not despair if you are using Visual Studio 2010, you can get Quick Access from the Productivity Power Tools extension.  To do this, you can go to the extension manager: And then go to the gallery and search for Productivity Power Tools and install it.  If you don’t have VS2012 yet, then the Productivity Power Tools is the next best thing.  This extension updates VS2010 with features such as Quick Access, the Solution Navigator, searchable Add Reference Dialog, better tab wells, etc.  I highly recommend it! But back to the topic at hand!  In VS2012 Quick Launch is built into the IDE and can be accessed by clicking in the Quick Launch area of the title bar, or by pressing CTRL+Q.  If you have VS2010 with the PPT installed, though, it is called Quick Access and is accessible through View –> Quick Access: Regardless of which IDE you are using, the feature behaves mostly the same.  It allows you to search all of Visual Studio’s commands and options for a particular topic.  For example, let’s say you want to change from tabs to tabs expanded to spaces, but don’t remember where that option is buried.  You can bring up Quick Launch / Quick Access and type in “tabs”: And it brings up a list of all options on tabs, you can then choose the one appropriate to you and click on it and it will take you right there! A lot easier than diving through the options tree to find what you are looking for!  It also works on menu commands, for example if you can’t remember how to open the Output window: It shows you the menu items that will get you to the Output window, and (if applicable) the keyboard shortcuts.  Again, clicking on one of these will perform the action for you as well. There are also some tasks you can perform directly from Quick Launch / Quick Access.  For example, perhaps you are one of those people who like to have the line numbers in your editor (I do), so let’s bring up Quick Launch / Quick Access and type “line numbers”: And let’s select Turn Line Numbers On, and now our editor looks like: And Voila!  We have line numbers in VS2010.  You can do this in VS2012 too, but it takes you to the option settings instead of directly turning them off and on.  There are bound to be differences between the way the two editors organize settings and commands, but you get the point. So, as you can see, the Quick Launch / Quick Access feature in Visual Studio makes it easy to jump right to the options, commands, or tasks you are interested in without all the digging. 
Summary An IDE as powerful as Visual Studio has so many options and commands that it can be confusing to remember how to find and invoke them.  Quick Launch (Quick Access in VS2010 with Productivity Power Tools extension) is a quick and handy way to jump to any of these options, commands, or tasks quickly without having to remember in what menu or screen they are buried!  Technorati Tags: C#,CSharp,.NET,Little Wonders,Visual Studio,Quick Access,Quick Launch

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:

    Context
    (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview).
    (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing.
    (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher.
    (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future proof design for hardware we have yet to see.
    (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.

    Code
    (slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves.
    (slides 12-13) index<N>, extent<N>, and grid<N>.
    (slides 14-16) array<T,N>, array_view<T,N> and a comparison between them.
    (slide 17) parallel_for_each.
    (slides 18, 21) restrict.
    (slides 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves.
    (slide 22) Bring it all together with a matrix multiplication example.
    (slides 23-24) accelerator, and accelerator_view.
    (slides 26-29) Introduce tiling, incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

    IDE
    (slides 34, 37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come at the Beta timeframe, which is when I'll be spending more time talking about it.
    (slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

    Summary
    (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that).
    (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

    "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X"

    If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Automated SSRS deployment with the RS utility

    - by Stacy Vicknair
    If you're familiar with SSRS and its development you are probably aware of the SSRS web services. The RS utility is a tool that comes with SSRS that allows scripts to be executed against the SSRS web service without needing to create an application to consume the service. One of the better benefits of using this format rather than writing an application is that the script can be modified by others who might be involved in the creation and addition of scripts or the management of the SSRS environment.

    Reporting Services Scripter

    Jasper Smith from http://www.sqldbatips.com created Reporting Services Scripter to assist with the creation of a batch process to deploy an entire SSRS environment. The helper scripts below were created through the modification of his generated scripts. Why not just use this tool? You certainly can. For me, the volume of scripts generated seems less maintainable than just using some common methods extracted from those scripts and creating a deployment in a single script file. I would, however, recommend this as a product if you do not think that your environment will change drastically or if you do not need to deploy with a higher level of control over the deployment. If you just need to replicate, this tool works great.

    Executing with RS.exe

    Executing a script against rs.exe is fairly simple – typically something along the lines of rs.exe -i YourScript.rss -s http://yourserver/reportserver, where -i names the input script file and -s names the report server URL (the file and server names here are placeholders).

    The Script

    Half the battle is having a starting point. For the scripting I needed to do, the below is the starter script. A few notes:

    This script assumes integrated security.
    This script assumes your reports have one data source each.

    Both of the above are just what made sense for my scenario and are definitely modifiable to accommodate your needs. If you are unsure how to change the scripts to your needs, I recommend Reporting Services Scripter to help you understand the differences. The script has three main methods: CreateFolder, CreateDataSource and CreateReport. Scripting the server deployment is just a process of recreating all of the elements that you need through calls to these methods. If there are additional elements that you need to deploy that aren't covered by these methods, again I suggest using Reporting Services Scripter to get the code you would need, convert it to a repeatable method and add it to this script!
      Public Sub Main()
          CreateFolder("/", "Data Sources")
          CreateFolder("/", "My Reports")
          CreateDataSource("/Data Sources", "myDataSource", _
              "Data Source=server\instance;Initial Catalog=myDatabase")
          CreateReport("/My Reports", _
              "MyReport", _
              "C:\myreport.rdl", _
              True, _
              "/Data Sources", _
              "myDataSource")
      End Sub

      Public Sub CreateFolder(parent As String, name As String)
          Dim fullpath As String = GetFullPath(parent, name)
          Try
              RS.CreateFolder(name, parent, GetCommonProperties())
              Console.WriteLine("Folder created: {0}", name)
          Catch e As SoapException
              If e.Detail.Item("ErrorCode").InnerText = "rsItemAlreadyExists" Then
                  Console.WriteLine("Folder {0} already exists and cannot be overwritten", fullpath)
              Else
                  Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
              End If
          End Try
      End Sub

      Public Sub CreateDataSource(parent As String, name As String, connectionString As String)
          Try
              RS.CreateDataSource(name, parent, False, GetDataSourceDefinition(connectionString), GetCommonProperties())
              Console.WriteLine("DataSource {0} created successfully", name)
          Catch e As SoapException
              Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
          End Try
      End Sub

      Public Sub CreateReport(parent As String, name As String, location As String, overwrite As Boolean, dataSourcePath As String, dataSourceName As String)
          Dim reportContents As Byte() = Nothing
          Dim warnings As Warning() = Nothing
          Dim fullpath As String = GetFullPath(parent, name)

          ' Read the RDL definition from disk
          Try
              Dim stream As FileStream = File.OpenRead(location)
              reportContents = New [Byte](stream.Length - 1) {}
              stream.Read(reportContents, 0, CInt(stream.Length))
              stream.Close()

              warnings = RS.CreateReport(name, parent, overwrite, reportContents, GetCommonProperties())

              If Not (warnings Is Nothing) Then
                  Dim warning As Warning
                  For Each warning In warnings
                      Console.WriteLine(warning.Message)
                  Next warning
              Else
                  Console.WriteLine("Report: {0} published successfully with no warnings", name)
              End If

              ' Set the report's DataSource references
              Dim dataSources(0) As DataSource

              Dim dsr0 As New DataSourceReference
              dsr0.Reference = dataSourcePath
              Dim ds0 As New DataSource
              ds0.Item = CType(dsr0, DataSourceDefinitionOrReference)
              ds0.Name = dataSourceName
              dataSources(0) = ds0

              RS.SetItemDataSources(fullpath, dataSources)

              Console.WriteLine("Report DataSources set successfully")

          Catch e As IOException
              Console.WriteLine(e.Message)
          Catch e As SoapException
              Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
          End Try
      End Sub

      Public Function GetCommonProperties() As [Property]()
          ' Common CatalogItem properties
          Dim descprop As New [Property]
          descprop.Name = "Description"
          descprop.Value = ""
          Dim hiddenprop As New [Property]
          hiddenprop.Name = "Hidden"
          hiddenprop.Value = "False"

          Dim props(1) As [Property]
          props(0) = descprop
          props(1) = hiddenprop
          Return props
      End Function

      Public Function GetDataSourceDefinition(connectionString As String)
          Dim definition As New DataSourceDefinition
          definition.CredentialRetrieval = CredentialRetrievalEnum.Integrated
          definition.ConnectString = connectionString
          definition.Enabled = True
          definition.EnabledSpecified = True
          definition.Extension = "SQL"
          definition.ImpersonateUser = False
          definition.ImpersonateUserSpecified = True
          definition.Prompt = "Enter a user name and password to access the data source:"
          definition.WindowsCredentials = False
          definition.OriginalConnectStringExpressionBased = False
          definition.UseOriginalConnectString = False
          Return definition
      End Function

      Private Function GetFullPath(parent As String, name As String) As String
          If parent = "/" Then
              Return parent + name
          Else
              Return parent + "/" + name
          End If
      End Function

    Read the article

  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    BIG Data – such a big word – everybody talks about it nowadays. It is the word in the database world. In one conversation I asked my friend Jasjeet Singh that very question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet is working as a Technical Manager with Koenig Solutions. He leads the SQL domain, and holds rich IT industry experience. Talking about Koenig, it is a 19 year old IT training company that offers several certification choices. Some of its courses include SharePoint Training, Project Management certifications, Microsoft Trainings, Business Intelligence programs, Web Design and Development courses etc.

    Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are quite popular these days. Big Data is not just about being large in size; it is also about the variety of the data, which differs in form or type. Some examples of Big Data are given below:

    Scientific data related to weather and atmosphere, genetics etc.
    Data collected by various medical procedures, such as radiology, CT scans, MRI etc.
    Data related to the Global Positioning System
    Pictures and videos
    Radio frequency data
    Data that may vary very rapidly, like stock exchange information

    Apart from the difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs:

    Volume: It simply means a large volume of data that may span petabytes, exabytes and so on. However, it also varies from organization to organization what volume of data they consider to be Big Data.
    Variety: As discussed above, Big Data is not limited to relational information or structured data. It can also include unstructured data like pictures, videos, text, audio etc.
    Velocity: Velocity means the speed at which data changes. The higher the velocity, the more efficient the system must be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss.
    Veracity: It has been recently added as the fourth V, and generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth out of the enormous amount of data, especially data that has high velocity. There are always chances of having imprecise and uncertain data. It is a challenging task to clean such data before it is analyzed.

    Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industrial segments, including Retail, Manufacturing, Service, Finance and Healthcare. It will help them to automate business decisions, increase productivity, and innovate and develop new products.

    Thanks Jasjeet Singh for an excellent write-up. Jasjeet Singh is working as a Technical Manager with Koenig Solutions.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

    Read the article

  • How to check if a cdrom is in the tray remotely (via ssh)?

    - by adempewolff
    I have a server running Ubuntu 10.04 (it's on the other side of the world and I haven't built up the wherewithal to upgrade it remotely yet) and I have been told that there is a CD in one of its two CD drives. I want to rip an image of the CD and then download it to my local computer (I don't need help with either of these steps). However, I cannot seem to confirm whether or not there actually is a CD in the drive as I was told. It did not automatically mount anywhere (which I'm thinking might just be a result of it being a headless server not running X, nautilus, or any of the other nice user friendly things). There are two CD drives connected via SCSI:

      austin@austinvpn:/proc/scsi$ cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 00 Lun: 00
        Vendor: ATA      Model: WDC WD400EB-75CP  Rev: 06.0
        Type:   Direct-Access                     ANSI SCSI revision: 05
      Host: scsi1 Channel: 00 Id: 00 Lun: 00
        Vendor: Lite-On  Model: LTN486S 48x Max   Rev: YDS6
        Type:   CD-ROM                            ANSI SCSI revision: 05
      Host: scsi1 Channel: 00 Id: 01 Lun: 00
        Vendor: SAMSUNG  Model: CD-R/RW SW-248F   Rev: R602
        Type:   CD-ROM                            ANSI SCSI revision: 05

    However, when I try mounting either of these devices (and every other device that could possibly be the CD drive), it says no medium found:

      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd1 /cdrom
      mount: no medium found on /dev/sr1
      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd0 /cdrom
      mount: no medium found on /dev/sr0
      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom /cdrom
      mount: no medium found on /dev/sr1
      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom1 /cdrom
      mount: no medium found on /dev/sr0
      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrw /cdrom
      mount: no medium found on /dev/sr1

    Here are the contents of my /dev folder:

      austin@austinvpn:/proc/scsi$ ls /dev
      agpgart          loop6            ram6      tty10  tty38  tty8
      austinvpn        loop7            ram7      tty11  tty39  tty9
      block            lp0              ram8      tty12  tty4   ttyS0
      bsg              mapper           ram9      tty13  tty40  ttyS1
      btrfs-control    mcelog           random    tty14  tty41  ttyS2
      bus              mem              rfkill    tty15  tty42  ttyS3
      cdrom            net              root      tty16  tty43  urandom
      cdrom1           network_latency  rtc       tty17  tty44  usbmon0
      cdrw             network_throughput rtc0    tty18  tty45  usbmon1
      char             null             scd0      tty19  tty46  usbmon2
      console          oldmem           scd1      tty2   tty47  usbmon3
      core             parport0         sda       tty20  tty48  usbmon4
      cpu_dma_latency  pktcdvd          sda1      tty21  tty49  vcs
      disk             port             sda2      tty22  tty5   vcs1
      dri              ppp              sda5      tty23  tty50  vcs2
      ecryptfs         psaux            sg0       tty24  tty51  vcs3
      fb0              ptmx             sg1       tty25  tty52  vcs4
      fd               pts              sg2       tty26  tty53  vcs5
      full             ram0             shm       tty27  tty54  vcs6
      fuse             ram1             snapshot  tty28  tty55  vcs7
      hpet             ram10            snd       tty29  tty56  vcsa
      input            ram11            sndstat   tty3   tty57  vcsa1
      kmsg             ram12            sr0       tty30  tty58  vcsa2
      log              ram13            sr1       tty31  tty59  vcsa3
      loop0            ram14            stderr    tty32  tty6   vcsa4
      loop1            ram15            stdin     tty33  tty60  vcsa5
      loop2            ram2             stdout    tty34  tty61  vcsa6
      loop3            ram3             tty       tty35  tty62  vcsa7
      loop4            ram4             tty0      tty36  tty63  vga_arbiter
      loop5            ram5             tty1      tty37  tty7   zero

    And here is my fstab file:

      austin@austinvpn:/proc/scsi$ cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid -o value -s UUID' to print the universally unique identifier
      # for a device; this may be used with UUID= as a more robust way to name
      # devices that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      /dev/mapper/austinvpn-root / ext4 errors=remount-ro 0 1
      # /boot was on /dev/sda1 during installation
      UUID=ed5520ae-c690-4ce6-881e-3598f299be06 /boot ext2 defaults 0 2
      /dev/mapper/austinvpn-swap_1 none swap sw 0 0

    Am I missing something or doing something wrong, or is there just no CD in the drive, or is the drive possibly broken? Is there any nice command to list devices with mountable media? Thanks in advance for any help!

    Read the article

  • CodePlex Daily Summary for Friday, June 08, 2012

    Popular Releases

    My Google Workspace: Google Workspace 1.0.0: Useful Google Search workspace that includes web browsing and applications access. Download it to see it in action; please take a look and let me know your feedback. Thanks!

    Tweetz - Windows Twitter Client Gadget: tweetz 3.1.5.8: Fixes bug where Ctrl+Shift+S sends a tweet when only Ctrl+S should send the tweet.

    C++ AMP Conformance Test Suite: C++ AMP Conformance Test Suite 0.99.0: Initial release of the C++ AMP Conformance Test Suite.

    Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules; Open folder content feature; some bug fixes.

    LINQ to Twitter: LINQ to Twitter Beta v2.0.26: Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, Client Profile, and Windows 8. 100% Twitter API coverage. Also available via NuGet! Follow @JoeMayo.

    Python Tools for Visual Studio: 1.5 Beta 1: We're pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: support for CPython, IronPython, Jython and PyPy; a Python editor with advanced member and signature intellisense and refactoring; code navigation ("Find all refs", goto definition, and object browser); local and remote debugging; ...

    Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: automatically flip components when placing; delete components using keyboard delete key; resize document; document properties window; print document; recent files list; confirm when exiting with unsaved changes; thumbnail previews in Windows Explorer for CDDX files; show shortcut keys in toolbox; highlight selected item in toolbox; zoom using mouse scroll wheel while holding down ctrl key; plugin support for custom export formats and custom import formats; open...

    Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbraco: v5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4 it's not yet on a par feature- or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. In a nutshell: Package Builder; Starter Kits; Dynamic Extension Methods; Querying / IsHelpers; friendly alt template URLs; localization; various bug fixes / performance enhancements; Gett...

    JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6 minutes video. New features in JayData 1.0.5: http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only): this module can be used to bind data retrieved by JayData to a Sencha Touch 2 generated user interface. (exam...

    32feet.NET: 3.5: This version changes the 32feet.NET library (both desktop and NETCF) to use .NET Framework version 3.5. Previously we compiled for .NET v2.0. There are no code changes from our version 3.4. See the 3.4 release for more information. Changes due to compiling for .NET 3.5: applications should be changed to use NET/NETCF v3.5. Removal of class InTheHand.Net.Bluetooth.AsyncCompletedEventArgs, which we provided on NETCF.
We now just use the standard .NET System.ComponentModel.AsyncCompletedEvent...DotNetNuke® Links: 06.02.01: Added new DNN 6.2.0 beta social feature "friends" BugfixesApplication Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7Jolt Environment: Jolt v2 Stable: Many new features. Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in downloadSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New fetures:Multilingual Support Max users property in Standings Web Part Games time zone change (UTC +1) bug fix - Version 1.4 locking problem http://euro2012.codeplex.com/discussions/358262 bug fix - Field Title not found (v.1.3) German SP http://euro2012.codeplex.com/discussions/358189#post844228 Bug fix - Access is denied.for users with contribute rights Bug fix - Installing on non-English version of SharePoint Bug fix - Title Rules Installing SharePoint Euro 2012 PredictorSharePoint E...xNet: xNet 2.1.1: Release xNet 2.1.1Command Line Parser Library: 1.9.2.4 stable: This is the first stable of 1.9.* branch. Added tests for HelpText::AutoBuild. Fixed minor formatting error in HelpText::DefaultParsingErrorsHandler.myManga: myManga v1.0.0.4: ChangeLogUpdating from Previous Version: Extract contents of Release - myManga v1.0.0.4.zip to previous version's folder. Replaces: myManga.exe BakaBox.dll CoreMangaClasses.dll Manga.dll Plugins/MangaReader.manga.dll Plugins/MangaFox.manga.dll Plugins/MangaHere.manga.dll Plugins/MangaPanda.manga.dllMVVM Light Toolkit: V4RC (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RC. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibsPreviewExtAspNet: ExtAspNet v3.1.7: +2012-06-03 v3.1.7 -?????????BUG,??????RadioButtonList?,AJAX????????BUG(swtseaman、????)。 +?Grid?BoundField、HyperLinkField、LinkButtonField、WindowField??HtmlEncode?HtmlEncodeFormatString(TiDi)。 -HtmlEncode?HtmlEncodeFormatString??????true,??????HTML????????。 -??????Asp.Net??GridView?BoundField?????????。 -http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.htmlencode -?Grid?HyperLinkField、WindowField??UrlEncode??,????URL??(???true)。 -?????????????,?????????????...LiveChat Starter Kit: LCSK v1.5.2: New features: Visitor location (City - Country) from geo-location Pass configuration via javascript for the chat box New visitor identification (no more using the IP address as visitor identification) To update from 1.5.1 Run the /src/1.5.2-sql-updates.txt SQL script to update your database tables. If you have it installed via NuGet, simply update your package and the file will be included so you can run the update script. New installation The easiest way to add LCSK to your app is by...New ProjectsAFSAspnetHandyAppTry: ddddddBlackrain Engine: A real-time rendering, game/cinematic engine.codeplexbsso: codeplexbsso?????codesmith template for DbEntry.Net: codesmith template for DbEntry.NetDevTeamsUnse: Proyecto de software para crear un sistema de gestion para una bibliotecaErrordite Client: Client libraries for Errordite Errordite is a simple plug-in for any .net application that will log details of exceptions generated by your application. Errordite will group together errors that are the same, either automatically or with rules you define. 
You then decide how to progress with the errors. You can investigate them, request to ignore them or put them on hold while you have more pressing concerns. For further information and to sign up for the beta, please visit http://ww...Expense Tracker V2: Asp.net site to manage expenses on a daily basis.Experiment N-Layer Project: The goal of this project to build a deal application using microsoft nlayer sample project. Goal is to build a complete application with proper documentation. More to add...GoMusicNow Downloader: This is a simple application created to make things a little easier when downloading a whole album worth of mp3s from the gomusicnow.com web site. To use, simply add your gomusicnow username and password, choose a local folder to download the mp3s to, then paste the url of the gomusicnow 'links' page, for the album you have purchased. Once you hit start, the files will be downloaded sequentially. This application is my first windows application, and first time using WPF. Therefore, I ca...Help A Lot(HAL): Dll to help in the developementHospital: The hospital project is built for my dad's hospital in India. The version one's features to be implemented are: 1. Add or edit patient (Deadline: 6/10/2012) 2. Add or edit medical history of a patient 3. Add or edit patient visit 4. Search existing patients 5. Print discharge reportinControl: Basic Inv ControlLongNile_Projects: Mobile devices projectsNucleo Mobile Detection: This project wraps around Wireless Universal Resource FiLe (WURFL) and provides added server-side mobile detection features for ASP.NET web forms and ASP.NET MVC. It provides controls/helpers and extension methods that make device detection support much easier. It also provides some mobile simulation capabilities for testing purposes.Physical Quantities: This library represents all physical quantities, units of measures, unit systems and unit conversions, as stated in different sources, but mainly on Wikipedia.Proyecto cuarto a: este trabajo es de prueba en codeplex 4to Apython otp: an hmac-based one-time password algorithmRadiation IM: This project aims to create a secure IRC like client (in the main ideas). But it brings a lot of new ideas and makes it a quite good place to discuss and share things. In security.RULI Chain Code Image Generator: RULICCimageGeneratorSharePoint 2010 - Selected items export to excel: Out of the box export to excel feature allows to export the complete view, and not the selected items. In order to achieve the functionality of exporting only selected items to excel.social media solidario: Se trata de desarrollar una aplicación web para escritorio y móvil que ofrezca la posibilidad de que expertos en social media puedan compartir su conocimiento en este área con las ONG , se trata de que respondan las preguntas que las ONG formulen y que todo ese conocimiento quede clasificado y a disposición de todosStarfish: Planning and organisation app for Pilates instructors.testddgit060720121: cxvtestddgit060720122: gftestddhg060720121: ,ktestddtfs060720122: cxvtesttom06072012git01: dsadstesttom06072012hg01: fdstitle: lo mejorWii Game Maker: Wii Game Maker is a program made in VB.NET that was made to allow people with little to no programming experience make their Wii ideas into reality.WoJiudeProject: WoJiudeProjectWpf localization without compile: Localization without recompiling, use it to localize your app. (Including xaml support).zzz: zzz

    Read the article

  • Best way to Draw a cube for 3D Picking on a specific face

    - by Kenneth Bray
    Currently I am drawing a cube for a game that I am making, and the cube draw method is below. My question is: what is the best way to draw a cube and to be able to easily find the face that the cursor is over? My draw method works just fine, but I am getting ready to start to add picking (this will be used to mold the cubes into other shapes), and would like to know the best way to find a face of the cube.

        public void Draw() {
            // center point: posX, posY, posZ
            float radius = size / 2;

            // top
            glPushMatrix();
            glBegin(GL_QUADS);
            {
                glColor3f(1.0f, 0.0f, 0.0f); // red
                glVertex3f(posX + radius, posY + radius, posZ - radius);
                glVertex3f(posX - radius, posY + radius, posZ - radius);
                glVertex3f(posX - radius, posY + radius, posZ + radius);
                glVertex3f(posX + radius, posY + radius, posZ + radius);
            }
            glEnd();
            glPopMatrix();

            // bottom
            glPushMatrix();
            glBegin(GL_QUADS);
            {
                glColor3f(1.0f, 1.0f, 0.0f); // ?? color
                glVertex3f(posX + radius, posY - radius, posZ + radius);
                glVertex3f(posX - radius, posY - radius, posZ + radius);
                glVertex3f(posX - radius, posY - radius, posZ - radius);
                glVertex3f(posX + radius, posY - radius, posZ - radius);
            }
            glEnd();
            glPopMatrix();

            // right side
            glPushMatrix();
            glBegin(GL_QUADS);
            {
                glColor3f(1.0f, 0.0f, 1.0f); // ?? color
                glVertex3f(posX + radius, posY + radius, posZ + radius);
                glVertex3f(posX + radius, posY - radius, posZ + radius);
                glVertex3f(posX + radius, posY - radius, posZ - radius);
                glVertex3f(posX + radius, posY + radius, posZ - radius);
            }
            glEnd();
            glPopMatrix();

            // left side
            glPushMatrix();
            glBegin(GL_QUADS);
            {
                glColor3f(0.0f, 1.0f, 1.0f); // ?? color
                glVertex3f(posX - radius, posY + radius, posZ - radius);
                glVertex3f(posX - radius, posY - radius, posZ - radius);
                glVertex3f(posX - radius, posY - radius, posZ + radius);
                glVertex3f(posX - radius, posY + radius, posZ + radius);
            }
            glEnd();
            glPopMatrix();

            // front side
            glPushMatrix();
            glBegin(GL_QUADS);
            {
                glColor3f(0.0f, 0.0f, 1.0f); // blue
                glVertex3f(posX + radius, posY + radius, posZ + radius);
                glVertex3f(posX - radius, posY + radius, posZ + radius);
                glVertex3f(posX - radius, posY - radius, posZ + radius);
                glVertex3f(posX + radius, posY - radius, posZ + radius);
            }
            glEnd();
            glPopMatrix();

            // back side
            glPushMatrix();
            glBegin(GL_QUADS);
            {
                glColor3f(0.0f, 1.0f, 0.0f); // green
                glVertex3f(posX + radius, posY - radius, posZ - radius);
                glVertex3f(posX - radius, posY - radius, posZ - radius);
                glVertex3f(posX - radius, posY + radius, posZ - radius);
                glVertex3f(posX + radius, posY + radius, posZ - radius);
            }
            glEnd();
            glPopMatrix();
        }
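    Since the cube is axis-aligned, one approach worth considering for the picking part is to unproject the cursor into a world-space ray (e.g. with gluUnProject) and intersect it with the cube using the slab method, which tells you not only whether the cube was hit but also through which face the ray entered. Below is a hedged sketch in the same Java/LWJGL style as the question; the ray parameters are assumed to already be in world space, and the face names follow the Draw() method above (front = +Z).

        // Hedged sketch: ray/AABB intersection that also reports the face that was hit.
        // Assumes rayOrigin and rayDir come from unprojecting the cursor, and that
        // rayDir has no exactly-zero component landing on a slab boundary.
        public String pickFace(float[] rayOrigin, float[] rayDir) {
            float[] min = { posX - size / 2, posY - size / 2, posZ - size / 2 };
            float[] max = { posX + size / 2, posY + size / 2, posZ + size / 2 };
            float tNear = Float.NEGATIVE_INFINITY;
            float tFar = Float.POSITIVE_INFINITY;
            int hitAxis = -1;           // axis (0=x, 1=y, 2=z) whose slab the ray entered last
            boolean hitMaxSide = false; // true if the ray entered through the max side

            for (int axis = 0; axis < 3; axis++) {
                float t1 = (min[axis] - rayOrigin[axis]) / rayDir[axis];
                float t2 = (max[axis] - rayOrigin[axis]) / rayDir[axis];
                boolean maxSide = t1 > t2; // the max plane is crossed first
                float tEnter = Math.min(t1, t2);
                float tExit = Math.max(t1, t2);
                if (tEnter > tNear) {
                    tNear = tEnter;
                    hitAxis = axis;
                    hitMaxSide = maxSide;
                }
                tFar = Math.min(tFar, tExit);
                if (tNear > tFar || tFar < 0) {
                    return null; // ray misses the cube, or the cube is behind the camera
                }
            }
            switch (hitAxis) {
                case 0:  return hitMaxSide ? "right" : "left";
                case 1:  return hitMaxSide ? "top" : "bottom";
                default: return hitMaxSide ? "front" : "back";
            }
        }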

    Read the article

  • How to detect UTF-8-based encoded strings [closed]

    - by Diego Sendra
    A customer of ours asked us to build him a VB6 scraper with multi-language support, for which we needed to detect UTF-8 encoded strings so we could decode them later for proper display in the application UI. It's necessary to point out that this need arises from VB6's limitation of not natively supporting UTF-8 in its controls, contrary to .NET, where you can tell a control that it should expect UTF-8 encoding. VB6 natively supports only the ISO 8859-1 and/or Windows-1252 encodings, so textboxes, dropdowns, listview controls and others can't be set to natively support/expect UTF-8 the way they can in .NET; we would therefore see weird symbols such as é or è, making a whole mess at display time.

    The next function contains the whole set of UTF-8 encoded punctuation marks and symbols from languages like Spanish, Italian, German, Portuguese, French and others, based on an excellent UTF-8 list we got from this link - Ref. http://home.telfort.nl/~t876506/utf8tbl.html

    Basically, the function checks whether any of the listed UTF-8 encoded sequences, separated by | (pipe), is found in the passed string, doing a substring search first. If no match is found, it makes an alternative ASCII-value based search to get one. Say, a string like "Societé" (Society in English) would return FALSE through calling isUTF8("Societé"), while it would return TRUE when calling isUTF8("SocietÈ"), since È is the UTF-8 encoded representation of é. Once you have TRUE or FALSE, you can decode the string through the DecodeUTF8() function for proper display - a function we found somewhere else some time ago, also included in this post.

        Function isUTF8(ByVal ptstr As String)
            Dim tUTFencoded As String
            Dim tUTFencodedaux
            Dim tUTFencodedASCII As String
            Dim ptstrASCII As String
            Dim iaux, iaux2 As Integer
            Dim ffound As Boolean

            ffound = False

            ptstrASCII = ""
            For iaux = 1 To Len(ptstr)
                ptstrASCII = ptstrASCII & Asc(Mid(ptstr, iaux, 1)) & "|"
            Next

            tUTFencoded = "Ä|Ã…|Ç|É|Ñ|Ö|ÃŒ|á|Ã|â|ä|ã|Ã¥|ç|é|è|ê|ë|í|ì|î|ï|ñ|ó|ò|ô|ö|õ|ú|ù|û|ü|â€|°|¢|£|§|•|¶|ß|®|©|â„¢|´|¨|â‰|Æ|Ø|∞|±|≤|≥|Â¥|µ|∂|∑|âˆ|Ï€|∫|ª|º|Ω|æ|ø|¿|¡|¬|√|Æ’|≈|∆|«|»|…|Â|À|Ã|Õ|Å’|Å“|–|—|“|â€|‘|’|÷|â—Š|ÿ|Ÿ|â„|€|‹|›|ï¬|fl|‡|·|‚|„|‰|Â|Ú|Ã|Ë|È|Ã|ÃŽ|Ã|ÃŒ|Ó|Ô||Ã’|Ú|Û|Ù|ı|ˆ|Ëœ|¯|˘|Ë™|Ëš|¸|Ë|Ë›|ˇ" & _
                          "Å|Å¡|¦|²|³|¹|¼|½|¾|Ã|×|Ã|Þ|ð|ý|þ" & _
                          "â‰|∞|≤|≥|∂|∑|âˆ|Ï€|∫|Ω|√|≈|∆|â—Š|â„|ï¬|fl||ı|˘|Ë™|Ëš|Ë|Ë›|ˇ"

            tUTFencodedaux = Split(tUTFencoded, "|")
            If UBound(tUTFencodedaux) > 0 Then
                iaux = 0
                Do While Not ffound And Not iaux > UBound(tUTFencodedaux)
                    If InStr(1, ptstr, tUTFencodedaux(iaux), vbTextCompare) > 0 Then
                        ffound = True
                    End If
                    If Not ffound Then
                        'ASCII numeric search
                        tUTFencodedASCII = ""
                        For iaux2 = 1 To Len(tUTFencodedaux(iaux))
                            'gets ASCII numeric sequence
                            tUTFencodedASCII = tUTFencodedASCII & Asc(Mid(tUTFencodedaux(iaux), iaux2, 1)) & "|"
                        Next
                        'tUTFencodedASCII = Left(tUTFencodedASCII, Len(tUTFencodedASCII) - 1)
                        'compares numeric sequences
                        If InStr(1, ptstrASCII, tUTFencodedASCII) > 0 Then
                            ffound = True
                        End If
                    End If
                    iaux = iaux + 1
                Loop
            End If

            isUTF8 = ffound
        End Function

        Function DecodeUTF8(s)
            Dim i
            Dim c
            Dim n

            s = s & " "
            i = 1
            Do While i <= Len(s)
                c = Asc(Mid(s, i, 1))
                If c And &H80 Then
                    n = 1
                    Do While i + n < Len(s)
                        If (Asc(Mid(s, i + n, 1)) And &HC0) <> &H80 Then
                            Exit Do
                        End If
                        n = n + 1
                    Loop
                    If n = 2 And ((c And &HE0) = &HC0) Then
                        c = Asc(Mid(s, i + 1, 1)) + &H40 * (c And &H1)
                    Else
                        c = 191
                    End If
                    s = Left(s, i - 1) + Chr(c) + Mid(s, i + n)
                End If
                i = i + 1
            Loop
            DecodeUTF8 = s
        End Function
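    For comparison, and as the post hints, .NET can do this check natively. Below is a hedged C# sketch (the type and method names are just illustrative): a strict UTF8Encoding throws on malformed byte sequences, so a successful decode means the buffer is well-formed UTF-8. Keep in mind that plain ASCII also passes, since ASCII is a subset of UTF-8.

        using System.Text;

        static class Utf8Probe
        {
            // Returns true if the bytes form well-formed UTF-8.
            public static bool LooksLikeUtf8(byte[] bytes)
            {
                var strictUtf8 = new UTF8Encoding(
                    encoderShouldEmitUTF8Identifier: false,
                    throwOnInvalidBytes: true);
                try
                {
                    strictUtf8.GetString(bytes); // throws on malformed sequences
                    return true;
                }
                catch (DecoderFallbackException)
                {
                    return false;
                }
            }
        }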

    Read the article

  • Encode two integers into colour values and compare them in a HLSL shader

    - by Ben Slinger
    I am writing a 2D point and click adventure game in Monogame, and I'd like to be able to create an image mask for every room which defines which parts of the background a character can walk behind, and at which Y value a character needs to be at for the background to be drawn above the character. I haven't done any shader work before, but after doing some reading I thought the following solution should work:

    - Create a mask for the room with the different walk-behind areas painted in a colour that defines the baseline Y value (Walk Behind Mask)
    - Render all objects to a RenderTarget2D (Base Texture)
    - Render all objects to a different RenderTarget2D, but changing every pixel of each object to a colour that defines its Y value (Position Mask)
    - Pass these two textures plus the image mask into the shader, and for each pixel compare the colour of the Position Mask to the colour of the Walk Behind Mask - if the Position Mask pixel is larger (thus lower on the screen and closer to the camera) than the Walk Behind Mask, draw the pixel from the Base Texture, otherwise draw a transparent pixel (allowing the background to show through).

    I've got it mostly working, but I'm having trouble packing and unpacking the Y values into colours and retrieving them correctly in the shader. Here are some code examples of how I'm doing it so far. When drawing to the Position Mask RenderTarget2D:

        Color posColor = new Color(((int)Position.Y >> 16) & 255, ((int)Position.Y >> 8) & 255, (int)Position.Y & 255);

    So as far as I can tell, this should be taking the first 3 bytes of the position integer and encoding them into a 4-byte colour (ignoring the alpha as the 4th byte). This seems to work fine: when my character is at Y = 600, the resulting Color from this is {[Color: R=0, G=2, B=88, A=255, PackedValue=4283957760]}. I then have an area in my Walk Behind Mask that I only want the character to be displayed behind if his Y value is lower than 655, so I've painted it with R=0, G=2, B=143, A=255. Now, I think I have the shader OK as well; here's what I have:

        sampler BaseTexture : register(s0);
        sampler MaskTexture : register(s1);
        sampler PositionTexture : register(s2);

        float4 mask( float2 coords : TEXCOORD0 ) : COLOR0
        {
            float4 color = tex2D(BaseTexture, coords);
            float4 maskColor = tex2D(MaskTexture, coords);
            float4 positionColor = tex2D(PositionTexture, coords);

            float maskCompare = (maskColor.r * pow(2,24)) + (maskColor.g * pow(2,16)) + (maskColor.b * pow(2,8));
            float positionCompare = (positionColor.r * pow(2,24)) + (positionColor.g * pow(2,16)) + (positionColor.b * pow(2,8));

            return positionCompare < maskCompare ? float4(0,0,0,0) : color;
        }

        technique Technique1
        {
            pass NoEffect
            {
                PixelShader = compile ps_3_0 mask();
            }
        }

    This isn't working, however: currently all characters are displayed behind the walk-behind area, regardless of their Y value. I tried printing out some debug info by grabbing the pixel from both the Position Mask and the Walk Behind Mask under the current mouse position, and it seems like maybe the colours aren't being rendered to the Position Mask correctly? When calculating the colour in the code above I'm getting R=0, G=2, B=88, A=255, but when I mouse over my character I get R=0, G=0, B=30, A=255. Any ideas what I'm doing wrong? It seems like maybe I'm losing some information when rendering to the RenderTarget2D, but I'm not knowledgeable enough to figure out what's happening. Also, I should probably ask: is this an efficient way to do this? Will there be a performance impact?
    Edit: Whoops - it turns out there was a bug I'd introduced myself: I was drawing out the Position Mask with the position colour left over from some early testing I was doing. So this solution is working perfectly, though I'm still interested in whether it is an efficient solution performance-wise.
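    For anyone reusing this approach, one detail worth flagging: tex2D returns channels normalized to the 0..1 range, so the pow(2,24)-style weights above only compare correctly because both sides of the comparison are scaled by the same factor. To recover the actual integer Y on the GPU you have to multiply each channel by 255 first. A hedged sketch of a matched pack/unpack pair follows (C# on the CPU side, the HLSL unpack shown in comments; this assumes the usual 8-bit-per-channel render target, and the helper name is illustrative):

        using Microsoft.Xna.Framework;

        static class YPacking
        {
            // Pack a 24-bit integer Y value into an RGB colour (same scheme as above).
            public static Color PackY(int y)
            {
                return new Color((y >> 16) & 255, (y >> 8) & 255, y & 255);
            }

            // HLSL side (sketch): scale the normalized channels back to bytes first.
            //   float3 c = tex2D(PositionTexture, coords).rgb * 255;
            //   float y  = c.r * 65536 + c.g * 256 + c.b; // the original integer Y
        }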

    Read the article

  • How to use a list of values in Excel as filter in a query

    - by Luca Zavarella
    It often happens that a customer provides us with a list of items for which to extract certain information. Imagine, for example, that our client wishes to have the header information of the sales orders only for certain orders. Most likely he will give us a list of items in a column in Excel or, less probably, in a simple text file with the identification codes.

    As long as the given values are at most a dozen, it costs us nothing to copy and paste those values into SSMS and place them in a WHERE clause, using the IN operator and making sure to include the quotes in the case of alphanumeric elements (the sample database is AdventureWorks2008R2):

        SELECT *
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.SalesOrderNumber IN (
            'SO43667'
            ,'SO43709'
            ,'SO43726'
            ,'SO43746'
            ,'SO43782'
            ,'SO43796')

    Clearly, the need to add commas and quotes becomes a hassle when dealing with hundreds of items (which of course has happened to us!). It would be convenient to do a simple copy and paste, leaving the items exactly as they are pasted, and have the query work fine. We can get this convenience via a User Defined Function that returns the items in a table. We simply provide the function with an input string parameter containing the pasted items. Here is the T-SQL code, with comments to clarify what was written:

        CREATE FUNCTION [dbo].[SplitCRLFList] (@List VARCHAR(MAX))
        RETURNS @ParsedList TABLE
        (
            --< Set the item length to fit your needs
            Item VARCHAR(255)
        )
        AS
        BEGIN
            DECLARE
                --< Set the item length to fit your needs
                @Item VARCHAR(255)
                ,@Pos BIGINT

            --< Trim TABs due to indentation
            SET @List = REPLACE(@List, CHAR(9), '')

            --< Trim leading and trailing spaces, then add a CR/LF at the end of the list
            SET @List = LTRIM(RTRIM(@List)) + CHAR(13) + CHAR(10)

            --< Set the position at the first CR/LF in the list
            SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1)

            --< If chars other than CR/LFs exist in the list then...
            IF REPLACE(@List, CHAR(13) + CHAR(10), '') <> ''
            BEGIN
                --< Loop until no CR/LFs remain (not found = CHARINDEX returns 0)
                WHILE @Pos > 0
                BEGIN
                    --< Get the heading list chars from the first char to the first CR/LF, trimming spaces
                    SET @Item = LTRIM(RTRIM(LEFT(@List, @Pos - 1)))

                    --< If the item calculated this way is not empty...
                    IF @Item <> ''
                    BEGIN
                        --< ...insert it into the @ParsedList table variable
                        INSERT INTO @ParsedList (Item)
                        VALUES (@Item) --(CAST(@Item AS int)) --< Use the appropriate conversion if needed
                    END

                    --< Remove the first item from the list...
                    SET @List = RIGHT(@List, LEN(@List) - @Pos - 1)

                    --< ...and set the position to the next CR/LF
                    SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1)

                    --< Repeat this block while the loop condition above holds
                END
            END
            RETURN
        END

    At this point, having created the UDF, our query is transformed trivially into:

        SELECT *
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.SalesOrderNumber IN (
            SELECT Item
            FROM SplitCRLFList('SO43667
        SO43709
        SO43726
        SO43746
        SO43782
        SO43796') AS SCL)

    Convenient, isn't it? You can find the script DBA_SplitCRLFList.sql here. Bye!!
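    On SQL Server 2016 and later, a hedged alternative is the built-in STRING_SPLIT function, which avoids the custom UDF entirely. STRING_SPLIT only accepts a single-character separator, so the sketch below splits on LF and strips the CR (plus any surrounding spaces) from each item:

        -- Hedged sketch (SQL Server 2016+): built-in STRING_SPLIT instead of the UDF.
        SELECT *
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.SalesOrderNumber IN (
            SELECT LTRIM(RTRIM(REPLACE(value, CHAR(13), '')))
            FROM STRING_SPLIT('SO43667
        SO43709
        SO43726', CHAR(10))
            WHERE LTRIM(RTRIM(REPLACE(value, CHAR(13), ''))) <> '');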

    Read the article

  • Using a parser to locate faulty code

    - by ryan.riverside
    Lately I've been working a lot in PHP and have run into an abnormally large number of parsing errors. I realize these are my own fault and the result of sloppy initial coding on my part, but it's getting to the point where I'm spending more time resolving tags than developing. In the interest of not slamming my productivity, are there any tricks for locating the problem in the code? What I'd really be looking for is a line to put in the code which would output the entire faulty tag in the parsing error, or something similar. Purely for reference's sake, my current error is:

        Parse error: syntax error, unexpected '}' in /home/content/80/9480880/html/cache/tpl_prosilver_viewtopic_body.html.php on line 50

    which refers to this:

        </dd><dd><?php if ($_poll_option_val['POLL_OPTION_RESULT'] == 0) { echo ((isset($this->_rootref['L_NO_VOTES'])) ? $this->_rootref['L_NO_VOTES'] : ((isset($user->lang['NO_VOTES'])) ? $user->lang['NO_VOTES'] : '{ NO_VOTES }')); } else { echo $_poll_option_val['POLL_OPTION_PERCENT']; } ?></dd>
        </dl>
        <?php }} if ($this->_rootref['S_DISPLAY_RESULTS']) { ?>
        <dl>
            <dt>&nbsp;</dt>
            <dd class="resultbar"><?php echo ((isset($this->_rootref['L_TOTAL_VOTES'])) ? $this->_rootref['L_TOTAL_VOTES'] : ((isset($user->lang['TOTAL_VOTES'])) ? $user->lang['TOTAL_VOTES'] : '{ TOTAL_VOTES }')); ?> : <?php echo (isset($this->_rootref['TOTAL_VOTES'])) ? $this->_rootref['TOTAL_VOTES'] : ''; ?></dd>
        </dl>
        <?php } if ($this->_rootref['S_CAN_VOTE']) { ?>
        <dl style="border-top: none;">
            <dt>&nbsp;</dt>
            <dd class="resultbar"><input type="submit" name="update" value="<?php echo ((isset($this->_rootref['L_SUBMIT_VOTE'])) ? $this->_rootref['L_SUBMIT_VOTE'] : ((isset($user->lang['SUBMIT_VOTE'])) ? $user->lang['SUBMIT_VOTE'] : '{ SUBMIT_VOTE }')); ?>" class="button1" /></dd>
        </dl>
        <?php } if (! $this->_rootref['S_DISPLAY_RESULTS']) { ?>
        <dl style="border-top: none;">
            <dt>&nbsp;</dt>
            <dd class="resultbar"><a href="<?php echo (isset($this->_rootref['U_VIEW_RESULTS'])) ? $this->_rootref['U_VIEW_RESULTS'] : ''; ?>"><?php echo ((isset($this->_rootref['L_VIEW_RESULTS'])) ? $this->_rootref['L_VIEW_RESULTS'] : ((isset($user->lang['VIEW_RESULTS'])) ? $user->lang['VIEW_RESULTS'] : '{ VIEW_RESULTS }')); ?></a></dd>
        </dl>
        <?php } ?>
        </fieldset></div>
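    As a general tip for this class of problem: PHP ships with a lint mode (php -l) that reports parse errors, with line numbers, without executing the file, so templates can be checked before they are deployed. A hedged sketch (the paths are illustrative):

        # Lint a single file; prints the first parse error with its line number.
        php -l cache/tpl_prosilver_viewtopic_body.html.php

        # Lint every .php file under the current directory (POSIX shell).
        find . -name '*.php' -exec php -l {} \;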

    Read the article

  • F1 Pit Pragmatics

    - by mikef
    "I hate computers. No, really, I hate them. I love the communications they facilitate, I love the conveniences they provide to my life. but I actually hate the computers themselves." - Scott Merrill, 'I hate computers: confessions of a Sysadmin' If Scott's goal was to polarize opinion and trigger raging arguments over the 'real reasons why computers suck', then he certainly succeeded. Impassioned vitriol sits side-by-side with rational debate. Yet Scott's fundamental point is absolutely on the money - Computers are a means to an end. The IT industry is finally starting to put weight behind the notion that good User Experience is an absolutely crucial goal, a cause championed by the likes of Microsoft's Bill Buxton, and which Apple's increasingly ubiquitous touch screen interface exemplifies. However, that doesn't change the fact that, occasionally, you just have to man up and deal with complex systems. In fact, sometimes you just need to sacrifice everything else in the name of performance. You'll find a perfect example of this Faustian bargain in Trevor Clarke's fascinating look into the (diabolical) IT infrastructure of modern F1 racing - high performance, high availability. high everything. To paraphrase, each car has up to 100 sensors, transmitting around 30Gb of data over the course of a race (70% in real-time). This data is then processed by no less than 3 servers (per car) so that the engineers in the pit have access to telemetry, strategy information, timing feeds, a connection back to the operations room in the team's home base - the list goes on. All of this while the servers are exposed "to carbon dust, oil, vibration, rain, heat, [and] variable power". Now, this is admittedly an extreme context where there's no real choice but to use complex systems where ease-of-use is, at best, a secondary concern. The flip-side is seen in small-scale personal computing such as that seen in Apple's iDevices, which are incredibly intuitive but limited in their scope. In terms of what kinds of systems they prefer to use, I suspect that most SysAdmins find themselves somewhere along this axis of Power vs. Usability, and which end of this axis you resonate with also hints at where you think the IT industry should focus its energy. Do you see yourself in the F1 pit, making split-second decisions, wrestling with information flows and reticent hardware to bend them to your will? If so, I imagine you feel that computers are subtle tools which need to be tuned and honed, using the advanced knowledge possessed only by responsible SysAdmins (If you have an iPhone, I suspect it's jail-broken). If the machines throw enigmatic errors, it's the price of flexibility and raw power. Alternatively, would you prefer to have your role more accessible, with users empowered by knowledge, spreading the load of managing IT environments? In that case, then you want hardware and software to have User Experience as their primary focus, and are of the "means to an end" school of thought (you're probably also fed up with users not listening to you when you try and help). At its heart, the dichotomy is between raw power (which might be difficult to use) and ease-of-use (which might have some limitations, but you can be up and running immediately). Of course, the ultimate goal is a fusion of flexibility, power and usability all in one system. It's achievable in specific software environments, and Red Gate considers it a target worth aiming for, but in other cases it's a goal right up there with cold fusion. 
I think it'll be a long time before we see it become ubiquitous. In the meantime, are you Power-Hungry or a Champion of Usability? Cheers, Michael Francis Simple Talk SysAdmin Editor

    Read the article

  • NetBeans PHP Community Council

    - by Tomas Mysik
    Hi all, today we would like to inform all of you that you now have a chance to improve NetBeans via the NetBeans PHP Community Council. The author of this activity is Timur Poperecinii, and he would like to tell you a few words about it.

    Hello, passionate technical people! First of all let me introduce myself: my name is Timur, and I'm a developer from Moldova (that little country between Romania and Ukraine). I develop mostly in .NET and jQuery, but I love to learn more; not being an expert, I am familiar with Java (Struts2, Play), PHP (Symfony2), Ruby (Rails), Sencha Touch 2 and other technologies. I was "introduced" to PHP recently by a client of mine who requested that the work be done specifically in PHP.

    Let me tell you a little story about my experience with open source and IDEs. When I was studying at university, in 2007 I think, I wrote a simple little application in PHP and thought, "Damn, if only there was a good IDE for PHP so I could relax and not have to remember all the function names." When I searched the internet back then, pretty much everyone was using Vim or Emacs on Linux, but they had no autocomplete anyway, just syntax highlighting. I remember using a tool like Notepad++. Nowadays everything has changed: we have highlighting and autocomplete for just about all standard things in PHP in many IDEs. I use NetBeans for PHP, and I am really happy with the experience I have there with standard PHP code, but for frameworks I still think there is lots of room for improvement. For example, we have some Symfony 2 and Twig support, but I'd love to see more of that coming. I'm a big fan of file templates, where the main goal is to not waste time writing over and over again something that can be generated - and it counts even more when you don't have a lot of autocomplete. So I thought, "Hey, I know Java a little, and NetBeans has plugins, so maybe it's worth trying to write a file templates plugin" - and so I did. You can find details about my Unified Udevi Symfony2 Plugin for NetBeans 7.2 on my blog. It wasn't hard, and it was even fun!

    Give back to open source

    Now think a little: NetBeans is an open source project, and PHP support is just a part of it, so the resources are pretty limited in this area. But we, as the community that uses this product, want to have the best possible experience with PHP and frameworks(!!!). So why don't we GIVE BACK TO OPEN SOURCE? Imagine an IDE that can do all the things you want, and it is free. Now how far is NetBeans from that point? I guess not so far - you might miss a little niche thing that you use on a daily basis, but then the question appears: why don't you make it happen on your own?

    NetBeans PHP Community Council

    What I propose is to create a NetBeans PHP Community Council that will be formed of people willing to change something, willing to create plugins for their own needs and for the needs of the community, to test the plugins created by its members, and basically to evolve NetBeans in the direction they want it to go. I have already talked with the NetBeans PHP team. They are only happy to help this Council, with technical advice, by opening some APIs we might need access to, and with other things. One important thing to mention is that this Council is a community project, so although we'll have direct discussions with the NetBeans PHP dev team, NetBeans is not the leading force here - it is the community. You can see more details about the goals and structure I propose at the NetBeans PHP Community Council wiki page.
    We use this mail list: [email protected] for discussions and topics related to the Council.

    How can I join

    To join the NetBeans PHP Community Council, please send an email to [email protected] with the subject of the mail starting with [Council New Member]. You can subscribe to this mail list here: http://netbeans.org/projects/php/lists. In your mail, please indicate your location, age and experience in both Java and PHP; I need this data to assign you to a team. A response will be sent to you with your next assignment and some people to contact. I really hope that you'll make a step forward and try to make your everyday use of NetBeans even more fun.

    Read the article

  • ASP.NET MVC: Moving code from controller action to service layer

    - by DigiMortal
    I fixed one controller action in my application that didn't seem good enough to me. It wasn't a big change, but it is worth showing beginners how clean your code can be when you use correct layering in your application. As an example I use code from my posting ASP.NET MVC: How to implement invitation codes support.

    Problematic controller action

    Although my controller action works well, I don't like how it looks. It is too much for a controller action, in my opinion.

        [HttpPost]
        public ActionResult GetAccess(string accessCode)
        {
            if (string.IsNullOrEmpty(accessCode.Trim()))
            {
                ModelState.AddModelError("accessCode", "Insert invitation code!");
                return View();
            }

            Guid accessGuid;

            try
            {
                accessGuid = Guid.Parse(accessCode);
            }
            catch
            {
                ModelState.AddModelError("accessCode", "Incorrect format of invitation code!");
                return View();
            }

            using (var ctx = new EventsEntities())
            {
                var user = ctx.GetNewUserByAccessCode(accessGuid);
                if (user == null)
                {
                    ModelState.AddModelError("accessCode", "Cannot find account with given invitation code!");
                    return View();
                }

                user.UserToken = User.Identity.GetUserToken();
                ctx.SaveChanges();
            }

            Session["UserId"] = accessGuid;

            return Redirect("~/admin");
        }

    Looking at this code, my first thought is that all this access code stuff must be located somewhere else. We have working functionality in the wrong place, and we should do something about it.

    Service layer

    I add layers to my application very carefully, because I don't like to use a hand grenade to kill a fly. When I see a real need for some layer and it doesn't add too much complexity, I add the new layer. Right now is a good time to add a service layer to my small application. After that it is time to move the code to the service layer and inject the service class into the controller.

        public interface IUserService
        {
            bool ClaimAccessCode(string accessCode, string userToken,
                                 out string errorMessage);

            // Other methods of the user service
        }

    I need this interface when writing unit tests, because I need a fake service that doesn't communicate with the database or other external sources.

        public class UserService : IUserService
        {
            private readonly IDataContext _context;

            public UserService(IDataContext context)
            {
                _context = context;
            }

            public bool ClaimAccessCode(string accessCode, string userToken, out string errorMessage)
            {
                if (string.IsNullOrEmpty(accessCode.Trim()))
                {
                    errorMessage = "Insert invitation code!";
                    return false;
                }

                Guid accessGuid;
                if (!Guid.TryParse(accessCode, out accessGuid))
                {
                    errorMessage = "Incorrect format of invitation code!";
                    return false;
                }

                var user = _context.GetNewUserByAccessCode(accessGuid);
                if (user == null)
                {
                    errorMessage = "Cannot find account with given invitation code!";
                    return false;
                }

                user.UserToken = userToken;
                _context.SaveChanges();

                errorMessage = string.Empty;
                return true;
            }
        }

    Right now I use a simple solution for errors, and I made the access code claiming method follow the usual TrySomething() method pattern. This way I can keep error messages and their retrieval away from the controller; in the controller I just mediate the error message from the service to the view.
    Controller

    Now all the code has moved to the service layer, and we also need some modifications to the controller code so it makes use of the user service. I don't show the DI/IoC details of how to hand the service instance to the controller here (a hedged constructor-injection sketch follows after the conclusion). The GetAccess() action of the controller now looks like this.

        [HttpPost]
        public ActionResult GetAccess(string accessCode)
        {
            var userToken = User.Identity.GetUserToken();
            string errorMessage;

            if (!_userService.ClaimAccessCode(accessCode, userToken,
                                              out errorMessage))
            {
                ModelState.AddModelError("accessCode", errorMessage);
                return View();
            }

            Session["UserId"] = Guid.Parse(accessCode);
            return Redirect("~/admin");
        }

    It's short and nice now, and it deals with the web site part of access code claiming. In the case of an error, the user is shown the access code claiming view with the error message that ClaimAccessCode() returns as an output parameter. If everything goes fine, the access code is reserved for the current user and the user is authenticated.

    Conclusion

    When a controller action grows big, you have to move code to the layers it actually belongs to. In this posting I showed you how I moved access code claiming functionality from a controller action to a user service class that belongs to the service layer of my application. As a result, I have a controller action that coordinates the user interaction when going through the access code claiming process. The controller communicates with the service layer and gets information about whether access code claiming succeeded.
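    For completeness, here is the hedged sketch of the DI/IoC wiring mentioned above, using plain constructor injection (the controller name is illustrative, and how the container itself is configured varies by library; the shape of the controller stays the same):

        // Hedged sketch: the IoC container (or a custom controller factory)
        // supplies the IUserService implementation to the controller.
        public class AccessController : Controller
        {
            private readonly IUserService _userService;

            public AccessController(IUserService userService)
            {
                if (userService == null)
                    throw new ArgumentNullException("userService");

                _userService = userService;
            }

            // ... GetAccess() action as shown above ...
        }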

    Read the article

  • TechEd Israel 2010 may only accept speakers from sponsors

    - by RoyOsherove
    A month or so ago, Microsoft Israel started sending out emails to its partners and registered event users to "Save the date!" - Microsoft TechEd Israel is coming, and it's going to be this November! "Great news," I thought to myself. I'd been to a couple of the MS TechEd events, as a speaker and as an attendee, and it was lovely and professionally done. Israel is an amazing place for technology and development, and TechEd hosted some big names in the world of MS software.

    A couple of weeks ago, I was shocked to hear from a couple of people that Microsoft Israel plans to only accept non-MS TechEd speakers from sponsors of the event. That means that, according to the amount you have paid, you get to insert one or more of your own selected speakers as part of TechEd. I've spent the past couple of weeks trying to gather more evidence of this, and have gotten some input from within MS about it. It looks like that is indeed the case, though no MS rep was prepared to answer any of my emails publicly. If they approach me now, I'd be happy to print their response.

    What does this mean? If this is true, it means that Microsoft Israel is making a grave mistake:

    - They are diluting the quality of the speakers for pure money factors. That means that as a TechEd attendee, who paid good money, you might be sitting down to watch nothing more than a bunch of infomercials, or sub-standard speakers - since speakers are no longer selected on quality or interest in their topic.
    - They are turning the conference from a learning event into a commercially driven event.
    - They are closing off the stage to the community of speakers who may not be associated with any organization willing to be a sponsor.
    - They are losing speakers (such as myself) who will not want to be part of such an event. (Yes - even if my company ends up sponsoring the event, I will not take part in it. Sorry, Eli!)
    - They are saying "F&$K you" to the community of MVPs, who should be the first people approached about technical talks (my guess is many MVPs wouldn't want to talk at an event driven that way anyway).

    I do hope this ends up not being true, but it looks like it is. MS Israel had already done such a thing with the Developer Days event previously held in Israel - only sponsors were allowed to insert speakers into the event. If this turns out to be true, I would urge the MS community in Israel to NOT TAKE PART IN THIS EVENT in any form (attendee, speaker, sponsor or otherwise). By taking part, you will be telling MS Israel it's OK to piss all over the community that they are quietly suffocating anyway.

    The MVP case

    MS Israel has managed to screw the MVP program as well. MS MVPs (I'm one) have had a tough time here in Israel the past couple of years, ever since Yosi Taguri left the blue badge ranks and no real community leader was left. Whoever runs things right now has their eyes and mind set elsewhere, with the software MVP community far from mind and heart. No special MVP events (except a couple of small ones this year). No real MVP leadership happens here, and the MVP MEA lead (Ruari) being on a remote line is not really what's needed. "MVP? What's that?" I'm sure many MS Israel employees would say. Exactly my point.

    Last word

    I've been disappointed by the MS machine for a while now, but their slowness over the past couple of years to realize what real community means really turns me off. Maybe it's time to move on.
Maybe I shouldn’t be chasing people at MS Israel begging for a room to host the Agile Israel user group. Maybe it’s time to say a big bye bye and start looking at a life a bit more disconnected.

    Read the article

  • CodePlex Daily Summary for Wednesday, July 04, 2012

    CodePlex Daily Summary for Wednesday, July 04, 2012

    Popular Releases
    - MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added: modified all Mvc4-related features to conform with the Mvc4 RC; all items controls now accept any IEnumerable<T> (before, just List<T> was accepted by most controls); a retrievalManager class that automatically retrieves data from a data source whenever it catches events triggered by filtering, sorting, and paging controls; a move method on the updatesManager to move one child object from one father to another (the move operation can be undone like the insert, update and delete operatio...)
    - BlackJumboDog: Ver5.6.6: 2012.07.03 Ver5.6.6 (1) ???????????ftp://?????????、????LIST?????
    - Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for issue #18296: provide an "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references, so a single source file ends up with the same minified names on different platforms. Add the ability to specify kno...
    - LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which were made some time ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for issue 710. Column finder (press F8 to show). Terminal server issues: multiple sessions with the same user should work now. Settings export/import is available via the Settings dialog but still incomplete (e.g. tab colors are not saved); maybe I'll change the file format one day; no command line support yet (for importin...
    - DynamicToSql: DynamicToSql 1.0.0 (beta): 1.0.0 beta version
    - CommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0, ranging from ActiveRecord, CSV, command line parsing, configuration, holiday calendars, logging, and authentication to much more. Fluentscript: CommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Release notes for FluentScript are located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final Release. Application: FluentScript Versio...
    - SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint artifact files. Please view the discussions tab for ongoing FAQs.
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.60: Highlighted features & improvements: significant performance optimization; use of AJAX for adding products to the cart; new flyout mini-shopping cart; auto-complete suggestions for product searching; full-text support; EU cookie law support. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    - THE NVL Maker: The NVL Maker Ver 3.51: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 ????:http://115.com/file/beoef05k#THE-NVL-Maker-ver3.51-sim.7z ????:http://www.mediafire.com/file/6tqdwj9jr6eb9qj/THENVLMakerver3.51tra.7z ======================================== ???? ======================================== 3.51 beta ???: ·?????????????????????? ·?????????,?????????0,?????????????????????? ·??????????????????????????? ·?????????????TJS????(EXP??) ·??4:3???,???????????????,??????????? ·?????????...
    - ????: ????2.0.3: 1、???????????。 2、????????。 3、????????????。 4、bug??,????。
    - AssaultCube Reloaded: 2.5 Intrepid: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package, then try to compile it; if it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included). You should delete /home/config/saved.cfg to reset binds/other stuff. If you us...
    - Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.0: User right licensing; ContentType; version 2.0.267.1
    - Bongiozzo Photosite: Alpha: Just the first stable release
    - MDS MODELING WORKBOOK: MDS MODELING WORKBOOK: This is the initial release. Works with SQL 2008 R2 Master Data Services. Also works with SQL 2012 Master Data Services but has not been completely tested.
    - Logon Screen Launcher: Logon Screen Launcher 1.3.0: FIXED - minor handle leak issue
    - BF3Rcon.NET: BF3Rcon.NET 25.0: This update brings the library up to server release R25, which includes the few additions from R21. There are also some minor bug fixes and a couple of other minor changes. In addition, many methods now take advantage of the RconResult class, which will give error information on failed requests; this replaces the bool returned by many methods. There is also an implicit conversion from RconResult to bool (both of which were true on success), so old code shouldn't break. Changes: added Player.S...
    - TelerikMvcGridCustomBindingHelper: Version 1.0.15.183-RC: TelerikMvcGridCustomBindingHelper 1.0.15.183 RC. This is a RC (release candidate) version; please test and report any error or problem you encounter. Warning: there are many changes in this release and some of them break backward compatibility. Release notes (since the 0.5.0-Alpha version): custom aggregates via an inherited class or inline fluent function; ignore group on aggregates for better performance; projections (restriction of the database columns queried) for an even better performa...
    - PunkBuster™ Screenshot Viewer: PunkBuster™ Screenshot Viewer 1.0: First release of PunkBuster™ Screenshot Viewer
    - Designing Windows 8 Applications with C# and XAML: Chapters 1 - 7 Release Preview: Source code for all examples from Chapters 1 - 7 for the Release Preview

    New Projects
    - AzureMVC4: hi
    - BoonCraft Launcher: BoonCraft Launcher V2.0. See http://352n.dyndns.org for more info on BoonCraft
    - C# to Javascript: Have you ever wanted to automagically have access to the enums you use in your .NET code in the JavaScript code you're writing for the client side?
    - CMCIC payment gateway provider for NB_Store: CMCIC payment gateway provider for NB_Store
    - COFE2 : Cloud Over IFileSystemInfo Entries Extensions: COFE2 enables users to access a user-defined file system on a local or foreign computer, using a System.IO-like interface or a RESTful Web API.
    - Directory access via LDAP: .NET library for managing a directory via LDAP.
    - E-mail processing: .NET library for processing e-mail.
    - FAST Search for SharePoint Query Statistics: F4SP Query Statistics scans the FAST for SharePoint query logs and presents statistics based on the logs: total queries, top queries, queries per user, etc.
    - File Backup: This project is an open source Windows Azure cloud backup WinForms application.
    - HanxiaoFu's personal: This will help synchronize my work done at home and at work
    - Lifekostyuk: This is my first project on TFS
    - Net WebSocket Server: NetWebSocket Server is a C#-based, high-performance and scalable WebSocket server.
    - Posroid for Windows 8: ?? ??????? ????? ?? ?? ????? ??? ?? ??? ??? ???? ? ? ???, ???8??? ???? ?? ??? ? ? ??? ??? ????? ?????.
    - PowerRules: PowerRules is a group of scripts that helps you audit your farm for configuration drift (configuration changes over time)
    - Projet Niloc TETRAS: Student project to learn how to manage and coordinate a team.
    - proyectobanco: Teacher, here is the project; apologies. Regards, SANCAN
    - sheetengine - Isometric HTML5 JavaScript Display Engine: Sheetengine is an HTML5 canvas based isometric display engine for JavaScript. It features textures, z-ordering, shadows, intersecting sheets, and object movements.
    - Shiny2: GTS spoofing program for Generations IV and V of Pokemon.
    - SMS Backup & Restore XML to MySQL: The purpose is to take the XML files created by SMS Backup and Restore (Android) and import them, via a Dropbox/Google Drive sync, into a MySQL db
    - Stundenplan TSST: A Windows Phone app for viewing the individual substitution timetables of the Technical School Steinfurt
    - swalmacenamiento: A project for the storage of records
    - TFS Work Item Association Check-in Policy: This policy requires TFS source control check-ins to be associated with a single, in-progress task that is assigned to you.
    - TurboTemplate: TurboTemplate is a fast source code generation helper which quickly transforms between your SQL database and some templated text of your choice.
    - visblog: This is a short summary of my project
    - VisualHG_fliedonion: This is a fork of VisualHG that I will use to improve VisualHG for myself. Supports only Visual Studio 2008 (not SP1).
    - Wave Tag Library: A very simple and modest .wav file tag library. With this library you can load .wav files, edit the tags (equivalent to mp3's ID3 tags) and save back to file.
    - Wordpress: WordPress is web software you can use to create a beautiful website or blog. We like to say that WordPress is both free and priceless at the same time.
    - ZEAL-C02 Bluetooth module Driver for Netduino: A class library for the .NET Micro Framework to support the Zeal-C02 Bluetooth module for Netduino.

    Read the article

  • Wireless is detected, but not connecting. Ethernet works. How to correct the wireless address?

    - by Lucas
    I am running Ubuntu 14.04 with cable internet, and my wireless is detected and connected, but I cannot connect to the internet. I know the problem is with my machine because other machines connect to the same router just fine. I can connect via ethernet just fine as well. Here are some notable tests: ping 192.168.0.105 works with 0% packet loss, but ping 192.168.0.1 has 100% packet loss. When I plug in my ethernet, ping 192.168.0.1 works with 0% packet loss. My wireless name is tg, and the router IP is 192.168.0.1 (where I can enter username and password). I suspect that I need to change my wireless address from 192.168.0.105 to 192.168.0.1. Any suggestions on how to proceed?

    Extra info:

        [lucas@lucas-ThinkPad-W520]/home/lucas$ iwconfig
        eth0      no wireless extensions.
        lo        no wireless extensions.
        wlan0     IEEE 802.11abgn  ESSID:"tg"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: 00:02:6F:83:F8:F4
                  Bit Rate=1 Mb/s   Tx-Power=15 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=62/70  Signal level=-48 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:52  Invalid misc:166  Missed beacon:0

        [lucas@lucas-ThinkPad-W520]/home/lucas$ ifconfig
        eth0      Link encap:Ethernet  HWaddr f0:de:f1:b2:53:53
                  inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::f2de:f1ff:feb2:5353/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:980003 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:498384 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1320506168 (1.3 GB)  TX bytes:59780591 (59.7 MB)
                  Interrupt:20 Memory:f3a00000-f3a20000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:21927 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:21927 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1781719 (1.7 MB)  TX bytes:1781719 (1.7 MB)

        wlan0     Link encap:Ethernet  HWaddr 24:77:03:29:8f:dc
                  inet addr:192.168.0.105  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::2677:3ff:fe29:8fdc/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11828 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:15444 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4855662 (4.8 MB)  TX bytes:2250585 (2.2 MB)

        [lucas@lucas-ThinkPad-W520]/home/lucas$ lspci -nn | grep 0280
        03:00.0 Network controller [0280]: Intel Corporation Centrino Ultimate-N 6300 [8086:4238] (rev 3e)

        [lucas@lucas-ThinkPad-W520]/home/lucas$ rfkill list
        0: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        1: tpacpi_bluetooth_sw: Bluetooth
                Soft blocked: no
                Hard blocked: no
        2: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no

    With ethernet unplugged:

        [lucas@lucas-ThinkPad-W520]/home/lucas$ route -n | grep UG
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 wlan0

    With ethernet plugged in:

        [lucas@lucas-ThinkPad-W520]/home/lucas$ route -n | grep UG
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0

        [lucas@lucas-ThinkPad-W520]/home/lucas$ nm-tool
        NetworkManager Tool
        State: connected (global)

        - Device: wlan0 [tg] ----------------------------------------------------------
          Type:              802.11 WiFi
          Driver:            iwlwifi
          State:             connected
          Default:           no
          HW Address:        24:77:03:29:8F:DC

          Capabilities:
            Speed:           52 Mb/s

          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes

          Wireless Access Points (* = current AP)
            tatum:            Infra, 40:8B:07:D8:A5:04, Freq 2437 MHz, Rate 54 Mb/s, Strength 42, WPA WPA2
            ums:              Infra, 00:20:A6:72:52:BF, Freq 2437 MHz, Rate 54 Mb/s, Strength 59
            Alpha 40:         Infra, 28:CF:E9:86:59:5D, Freq 5260 MHz, Rate 54 Mb/s, Strength 30, WPA WPA2
            thepromiselan:    Infra, 58:6D:8F:51:E5:54, Freq 2452 MHz, Rate 54 Mb/s, Strength 34, WPA WPA2
            xfinitywifi:      Infra, 06:1D:D5:84:27:A0, Freq 2437 MHz, Rate 54 Mb/s, Strength 52
            *tg:              Infra, 00:02:6F:83:F8:F4, Freq 2462 MHz, Rate 54 Mb/s, Strength 73, WPA2
            ums:              Infra, 00:20:A6:A1:9F:25, Freq 2452 MHz, Rate 54 Mb/s, Strength 44
            BRIAN-PC_Network: Infra, 20:AA:4B:DD:93:D6, Freq 2462 MHz, Rate 54 Mb/s, Strength 35, WPA2
            HOME-C0F8:        Infra, 44:32:C8:D2:C0:F8, Freq 2412 MHz, Rate 54 Mb/s, Strength 40, WPA WPA2
            abcsexy:          Infra, 28:28:5D:27:5D:85, Freq 2412 MHz, Rate 54 Mb/s, Strength 27, WPA WPA2

          IPv4 Settings:
            Address:         192.168.0.105
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.0.1
            DNS:             192.168.0.1

        - Device: eth0 [Wired connection 1] -------------------------------------------
          Type:              Wired
          Driver:            e1000e
          State:             connected
          Default:           yes
          HW Address:        F0:DE:F1:B2:53:53

          Capabilities:
            Carrier Detect:  yes
            Speed:           100 Mb/s

          Wired Properties
            Carrier:         on

          IPv4 Settings:
            Address:         192.168.0.100
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.0.1
            DNS:             192.168.0.1
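    Not advice from the question itself, but a hedged set of follow-up diagnostics for this symptom (association and DHCP succeed over wlan0, yet the gateway is unreachable) is to check whether the gateway answers ARP on the wireless interface and to force a fresh DHCP lease; the interface and addresses below match the output above:

        # Does the gateway resolve on the wireless side at all?
        arp -n 192.168.0.1

        # Release and renew the DHCP lease on wlan0 only.
        sudo dhclient -r wlan0
        sudo dhclient wlan0

        # Ping the gateway explicitly over the wireless interface.
        ping -I wlan0 -c 4 192.168.0.1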

    Read the article

  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion. Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series. Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal details rather than external behavior), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to Refactorers Anonymous. I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with - and I count myself in that bucket - the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD. For me, it all comes down to the answer to the following question: how do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code. The second thing he doesn't like is backing into an architecture (go read his post to see what he means). I've certainly had to work with code like that before, and it's a nightmare - the code that nobody wants to touch. But that's not at all the kind of code you get with TDD, because if you're doing it right, you're following the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later, because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to unit-test make it easy to refactor. I also think Brian is missing an important point. We aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi, but pretty much a disaster if you were an average developer.
    In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.
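    For readers who haven't seen the loop the post keeps referring to, here is a minimal hedged illustration of "write a failing test, make it pass, refactor" in C# with NUnit (the BoundedStack type and the test are invented for the example):

        using NUnit.Framework;

        [TestFixture]
        public class BoundedStackTests
        {
            // Step 1 (red): written before BoundedStack exists, so it fails.
            [Test]
            public void Push_Then_Pop_Returns_Last_Item()
            {
                var stack = new BoundedStack(capacity: 4);
                stack.Push(42);
                Assert.AreEqual(42, stack.Pop());
            }
        }

        // Step 2 (green): the simplest code that passes the test.
        // Step 3 (refactor): the internals can now be reshaped freely,
        // because the test pins down the external behavior.
        public class BoundedStack
        {
            private readonly int[] _items;
            private int _count;

            public BoundedStack(int capacity) { _items = new int[capacity]; }
            public void Push(int item) { _items[_count++] = item; }
            public int Pop() { return _items[--_count]; }
        }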

    Read the article

< Previous Page | 953 954 955 956 957 958 959 960 961 962 963 964  | Next Page >