Search Results

Search found 66534 results on 2662 pages for 'document set'.


  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full-blown help desks can be crafted up using SharePoint just by customizing lists. No programming necessary. However, there are a few tricks I've painfully learned over the years that you can use for your own solutions.

    DO

    What's In A Name?
    When you create a new list, column, or view you'll commonly name it something like "Expense Reports". However this has the ugly effect of creating a url to the list as "Expense%20Reports". Or worse, an internal field name of "Expense_x0020_Reports", which is not only cryptic but hard to remember when you're trying to find the column by internal name. While "Expense Reports 2011" is user friendly, "ExpenseReports2011" is not (unless you're a programmer). So that's not the solution. Well, not entirely. Instead, when you create your column or list or view, use the scrunched-up name (I can't think of the technical term for it right now) of "ExpenseReports2011", "WomenAtTheOfficeThatAreMen" or "KoalaMeatIsGoodWhenBroiled". After you've created it, go back and change the name to the more friendly "Silly Expense Reports That Nobody Reads". The original internal name will be the url- and code-friendly one without spaces, while the one used on data entry forms and view headers will be the human version.

    Smart Columns
    When building a view, include columns that make sense. By default when you add a column the "Add to default view" box is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what's important to them and why. If they're doing something repetitively as part of their job, try to make their life easier by including what's most important to them. Do they really need to see the Created *and* Modified date of a document, or do they just need the title and author? You'll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don't ask).

    Gotta Keep It Separated
    Hey, views are there for a reason. Use them. While "All Items" is a fine way to present a list of, well, all items, it's hardly sufficient to present a list of servers built before the Y2K bug hit. You'll be scrolling the list for hours, finally arriving at page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They're the way you can slice and dice the data up so that you're not trying to consume 3,000 years of human evolution on a single web page. Remember, views can be filtered, so feel free to create a view for each status, or one for each operating system, or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it'll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. The discoverability of the Views menu isn't overly obvious, and if you violate the rule of columns (see Horizontal Scrolling below) the view menu doesn't even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing "CLICK ME NOW" will help your users find their way.

    Sort It!
    Views are great, so we're building nice, rich views for the user. Awesomesauce. However, sort is not very discoverable by the user. For example, when you're looking at a view, how do you know if it's ascending or descending, and what is it sorted on? Maybe it's sorted using two fields, so what's that all about? Help your users by letting them know the information they're looking at is sorted. Maybe you name the view something appropriate like "Bogus Expense Claims Sorted By Deadbeats". If you use the naming strategy, just make sure you keep the name consistent with the description. In the previous example there had better be a Deadbeat column so I can see the sort in action. Having a "Loser" column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don't use acronyms, and even if they knew how to, it's not immediately obvious to them that's what you're trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they're looking at. Each view is its own page, so one CEWP won't be used across the board. Be descriptive in what the user is seeing, but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list.

    DO NOT

    Useless Attachments
    The attachments column is, in a word, useless. For the most part. Sure, it indicates there's an attachment on the list item, but in the grand scheme of things that's not overly informative. Maybe it is, and by all means, if it makes sense to you, include it. Colour it. Make it shine and stand like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful, so feel free to head over and check out this blog post by Paul Grenier on the task (warning, code ahead! Danger, Will Robinson!). In any case, I would suggest you remove it from your views. Again, if it's important then include it, but consider the jQuery solution above to make it functional. It's added by default to views and is one of the things that people forget to clean up.

    Horizontal Scrolling
    Screen real estate is premium, so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn't the most user-friendly experience. Most users can't figure out how to scroll vertically, let alone horizontally, so don't make it even more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again, views are your friend. Consider splitting up the data into views where one view contains 10 columns and another view contains the other 10. Okay, maybe your information doesn't work that way, but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don't want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user, "So what do you do with all this information?" The response is usually "With this data [the first 10 columns] I decide if I'm going to fire everyone, and with this data [the next 10 columns] I decide if I'm going to set the building on fire and collect the insurance". It's at that point I show them how to create two new views, "People Who Are About To Get The Axe" and "Beach Time For The Executives". Again, talk to your users and try to reason with them on cutting down the number of columns they see at once.

    Vertical Scrolling
    Another big faux pas I find is the use of multi-line comment fields in views. It's not so bad when you have a statement like this in your view: "I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now." It's fine if that's the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents.

    Duplicate Information
    Duplication is never good. Views are great as you can group data together. For example, create a view of project status reports grouped by author. Then you can see which project manager is being a dip and not submitting their report. However, if you group by author, do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author, do you need both? Horizontal real estate is always at a premium, so try not to clutter up the view with duplicate data like this. Oh yeah, if you're scratching your head saying "But Bil, if I don't include the Project name in the view and I have a lot of items then how do I know which one I'm looking at", that's a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again, it's just managing the information you have.

    Redundant, See Redundant
    This partially relates to duplicate information and smart columns, but basically remember not to include the obvious in a view. Remember, don't make me think. If you've gone to the trouble (and it was a lot of trouble, wasn't it?) to create separate views of your data by creating a "September Zombie Brain Sales", "October Zombie Brain Sales", etc., then please, for the love of all that is holy, do not include the Month and Product columns in your view. Similarly, if you create a "My" view of anything ("My Favourite Brands of Spandex", "My Co-Workers I Find The Urge To Disinfect") then again, do not include the owner or author field (or whatever field you use to identify "My"). That's just silly. Hope that helps! Happy customizing! A scripted version of the create-then-rename naming trick is sketched below.
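    For teams that provision lists in code, the same create-then-rename trick applies. Here is a minimal sketch using the SharePoint server object model; the site URL, list description, and class name are illustrative assumptions, not taken from the article, and the code must run on a SharePoint server.

    using System;
    using Microsoft.SharePoint; // server object model

    class ListNamingDemo
    {
        static void Main()
        {
            // The site URL is a placeholder - substitute your own site collection.
            using (SPSite site = new SPSite("http://intranet/sites/demo"))
            using (SPWeb web = site.OpenWeb())
            {
                // Create the list with the scrunched-up name so the URL becomes
                // .../Lists/ExpenseReports2011 rather than .../Expense%20Reports%202011.
                Guid listId = web.Lists.Add("ExpenseReports2011", "Expense tracking list",
                                            SPListTemplateType.GenericList);

                // Rename to the human-friendly title afterwards; the URL and internal
                // name keep the original compact form.
                SPList list = web.Lists[listId];
                list.Title = "Expense Reports 2011";
                list.Update();
            }
        }
    }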


  • How to delete/edit files from a read-only filesystem

    - by Santosh Linkha
    I am having a problem with my memory device (actually a memory card that acts as an external memory device, like a pen drive).

    experimentx@workmateX:/var/www/zendtest$ sudo rm /media/A88F-8788/python-2.7.1-docs-html.zip
    rm: cannot remove `/media/A88F-8788/python-2.7.1-docs-html.zip': Read-only file system

    I tried to change the file permissions, but that doesn't work either:

    experimentx@workmateX:/var/www/zendtest$ sudo chmod 0777 /media/A88F-8788/python-2.7.1-docs-html.zip
    chmod: changing permissions of `/media/A88F-8788/python-2.7.1-docs-html.zip': Read-only file system

    But it works perfectly on Windows.

    UPDATE: On opening the drive and running the command sudo mount -o remount,rw /media/A88F-8788, /var/log/syslog shows:

    Mar 23 15:29:48 workmateX kernel: [18042.257407] fat_get_cluster: 11 callbacks suppressed
    Mar 23 15:29:48 workmateX kernel: [18042.257414] FAT: Filesystem error (dev sdb1)
    Mar 23 15:29:48 workmateX kernel: [18042.257418] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:29:48 workmateX kernel: [18042.257425] FAT: Filesystem has been set read-only
    Mar 23 15:29:48 workmateX kernel: [18042.258187] FAT: Filesystem error (dev sdb1)
    Mar 23 15:29:48 workmateX kernel: [18042.258194] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.333787] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.333795] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.335949] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.335957] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.354903] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.354911] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.357213] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.357221] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.359547] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.359555] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.361929] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.361936] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.377416] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.377424] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.379384] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.379392] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.381898] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.381906] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:35 workmateX kernel: [18149.383764] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:35 workmateX kernel: [18149.383772] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.569747] fat_get_cluster: 11 callbacks suppressed
    Mar 23 15:31:40 workmateX kernel: [18154.569754] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.569758] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.569765] FAT: Filesystem has been set read-only
    Mar 23 15:31:40 workmateX kernel: [18154.572022] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.572029] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.582933] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.582941] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.585921] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.585929] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.587819] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.587827] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.597547] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.597555] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.599503] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.599511] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.602896] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.602905] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.615338] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.615346] fat_get_cluster: invalid cluster chain (i_pos 0)
    Mar 23 15:31:40 workmateX kernel: [18154.618574] FAT: Filesystem error (dev sdb1)
    Mar 23 15:31:40 workmateX kernel: [18154.618581] fat_get_cluster: invalid cluster chain (i_pos 0)

    var/log/message:

    Mar 23 15:29:48 workmateX kernel: [18042.257407] fat_get_cluster: 11 callbacks suppressed
    Mar 23 15:31:40 workmateX kernel: [18154.569747] fat_get_cluster: 11 callbacks suppressed


  • Android SDK PATH Error: doesn't find home directory [migrated]

    - by THeo Oliveira
    I use Ubuntu 12.10 and I tried to follow the guide "Install android sdk and eclipse in Ubuntu 12.04", but I keep getting this error:

    Error: The AVD manager normally uses the user's profile directory to store AVD files. However it failed to find the default profile directory. To fix this, please set the environment variable ANDROID_SDK_HOME to a valid path such as "~".

    Any ideas how I can fix this?


  • Using Generics in C# [closed]

    - by Uphaar Goyal
    I have started looking into using generics in C#. As an example, what I have done is create an abstract class which implements generic methods. These generic methods take a SQL query, a connection string, and the type T as parameters, then construct the data set, populate the object, and return it back. This way each business object does not need to have a method to populate it with data or construct its data set. All we need to do is pass the type, the SQL query, and the connection string, and these methods do the rest. I am providing the code sample here. I am just looking to discuss with people who might have a better solution to what I have done.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Data;
    using System.Data.SqlClient;
    using MWTWorkUnitMgmtLib.Business;
    using System.Collections.ObjectModel;
    using System.Reflection;

    namespace MWTWorkUnitMgmtLib.TableGateway
    {
        public abstract class TableGateway
        {
            public TableGateway() { }

            protected abstract string GetConnection();
            protected abstract string GetTableName();

            public DataSet GetDataSetFromSql(string connectionString, string sql)
            {
                DataSet ds = null;
                using (SqlConnection connection = new SqlConnection(connectionString))
                using (SqlCommand command = connection.CreateCommand())
                {
                    command.CommandText = sql;
                    connection.Open();
                    using (ds = new DataSet())
                    using (SqlDataAdapter adapter = new SqlDataAdapter(command))
                    {
                        adapter.Fill(ds);
                    }
                }
                return ds;
            }

            public static bool ContainsColumnName(DataRow dr, string columnName)
            {
                return dr.Table.Columns.Contains(columnName);
            }

            public DataTable GetDataTable(string connString, string sql)
            {
                DataSet ds = GetDataSetFromSql(connString, sql);
                DataTable dt = null;
                if (ds != null)
                {
                    if (ds.Tables.Count > 0)
                    {
                        dt = ds.Tables[0];
                    }
                }
                return dt;
            }

            public T Construct<T>(DataRow dr, T t) where T : class, new()
            {
                Type t1 = t.GetType();
                PropertyInfo[] properties = t1.GetProperties();
                foreach (PropertyInfo property in properties)
                {
                    if (ContainsColumnName(dr, property.Name) && (dr[property.Name] != null))
                        property.SetValue(t, dr[property.Name], null);
                }
                return t;
            }

            public T GetByID<T>(string connString, string sql, T t) where T : class, new()
            {
                DataTable dt = GetDataTable(connString, sql);
                DataRow dr = dt.Rows[0];
                return Construct(dr, t);
            }

            public List<T> GetAll<T>(string connString, string sql, T t) where T : class, new()
            {
                List<T> collection = new List<T>();
                DataTable dt = GetDataTable(connString, sql);
                foreach (DataRow dr in dt.Rows)
                    collection.Add(Construct(dr, t));
                return collection;
            }
        }
    }
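    For discussion purposes, here is a quick sketch of how such a gateway might be consumed. Employee, EmployeeGateway, the connection string, and the column names are hypothetical assumptions for illustration; they are not part of the original post.

    public class Employee
    {
        public int EmployeeID { get; set; }
        public string Name { get; set; }
    }

    public class EmployeeGateway : TableGateway
    {
        // Hypothetical connection string - substitute your own.
        protected override string GetConnection()
        {
            return "Server=.;Database=HR;Integrated Security=true";
        }

        protected override string GetTableName()
        {
            return "Employees";
        }

        // T is inferred as Employee from the instance passed in; the column names
        // in the SQL must match the property names for Construct<T> to map them.
        public Employee GetByID(int id)
        {
            // Illustration only - use parameterized SQL in real code.
            return GetByID(GetConnection(),
                "SELECT EmployeeID, Name FROM Employees WHERE EmployeeID = " + id,
                new Employee());
        }

        public List<Employee> GetAll()
        {
            return GetAll(GetConnection(), "SELECT EmployeeID, Name FROM Employees", new Employee());
        }
    }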


  • Sixeyed.Caching available now on NuGet and GitHub!

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/22/sixeyed.caching-available-now-on-nuget-and-github.aspx

    The good guys at Pluralsight have okayed me to publish my caching framework (as seen in Caching in the .NET Stack: Inside-Out) as an open-source library, and it's out now. You can get it here: Sixeyed.Caching source code on GitHub, and here: Sixeyed.Caching package v1.0.0 on NuGet. If you haven't seen the course, there's a preview here on YouTube: In-Process and Out-of-Process Caches, which gives a good flavour. The library is a wrapper around various cache providers, including the .NET MemoryCache, AppFabric cache, and memcached*. All the wrappers inherit from a base class which gives you a set of common functionality against all the cache implementations:

    • inherits OutputCacheProvider, so you can use your chosen cache provider as an ASP.NET output cache;
    • serialization and encryption, so you can configure whether you want your cache items serialized (XML, JSON or binary) and encrypted;
    • instrumentation, so you can optionally use performance counters to monitor cache attempts and hits, at a low level.

    The framework wraps up different caches into an ICache interface, and it lets you use a provider directly like this:

    Cache.Memory.Get<RefData>(refDataKey);

    - or with configuration to use the default cache provider:

    Cache.Default.Get<RefData>(refDataKey);

    The library uses Unity's interception framework to implement AOP caching, which you can use by flagging methods with the [Cache] attribute:

    [Cache]
    public RefData GetItem(string refDataKey)

    - and you can be more specific on the required cache behaviour:

    [Cache(CacheType=CacheType.Memory, Days=1)]
    public RefData GetItem(string refDataKey)

    - or really specific:

    [Cache(CacheType=CacheType.Disk, SerializationFormat=SerializationFormat.Json, Hours=2, Minutes=59)]
    public RefData GetItem(string refDataKey)

    Provided you get instances of classes with cacheable methods from the container, the attributed method results will be cached, and repeated calls will be fetched from the cache. You can also set a bunch of cache defaults in application config, like whether to use encryption and instrumentation, and whether the cache system is enabled at all:

    <sixeyed.caching enabled="true">
      <performanceCounters instrumentCacheTotalCounts="true" instrumentCacheTargetCounts="true" categoryNamePrefix="Sixeyed.Caching.Tests"/>
      <encryption enabled="true" key="1234567890abcdef1234567890abcdef" iv="1234567890abcdef"/> <!-- key must be 32 characters, IV must be 16 characters -->
    </sixeyed.caching>

    For AOP and methods flagged with the cache attribute, you can override the compile-time cache settings at runtime with more config (keyed by the class and method name):

    <sixeyed.caching enabled="true">
      <targets>
        <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheConfiguredInternal" enabled="false"/>
        <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheExpiresConfiguredInternal" seconds="1"/>
      </targets>
    </sixeyed.caching>

    It's released under the MIT license, so you can use it freely in your own apps and modify as required. I'll be adding more content to the GitHub wiki, which will be the main source of documentation, but for now there's an FAQ to get you started.

    * - in the course the framework library also wraps NCache Express, but there's no public redistributable library that I can find, so it's not in Sixeyed.Caching.
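    To make the two consumption styles concrete, here's a brief sketch. RefData, the service class, the loader method, and the use of a virtual method for Unity's interception are assumptions for illustration; only the Cache.Default call and the [Cache] attribute come from the post itself.

    public class RefData { } // placeholder type

    public class CachingUsageSketch
    {
        // Direct provider use, as shown above; refDataKey is a placeholder.
        public RefData GetCachedRefData(string refDataKey)
        {
            return Cache.Default.Get<RefData>(refDataKey);
        }
    }

    // Attribute-driven AOP caching: resolve this class from the Unity container
    // so the interception framework can apply the [Cache] attribute.
    public class RefDataService
    {
        [Cache(CacheType = CacheType.Memory, Minutes = 30)] // illustrative settings
        public virtual RefData GetItem(string refDataKey)
        {
            return LoadFromDatabase(refDataKey); // hypothetical expensive call
        }

        private RefData LoadFromDatabase(string refDataKey)
        {
            return new RefData(); // stand-in for a slow lookup
        }
    }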


  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as its entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with in 'newer', improved technology today - but I digress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations, like queries, DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs a dynamic query, and the code ends up looping over the result set using the ugly DataRow array syntax:

    public int UpdateAllSafeTitles()
    {
        int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks");
        if (result < 0)
            return result;
        result = 0;
        foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
        {
            string title = row["title"] as string;
            string safeTitle = row["safeTitle"] as string;
            int pk = (int)row["pk"];
            string newSafeTitle = this.GetSafeTitle(title);
            if (newSafeTitle != safeTitle)
            {
                this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk",
                    this.CreateParameter("@safeTitle", newSafeTitle),
                    this.CreateParameter("@pk", pk));
                result++;
            }
        }
        return result;
    }

    The problem with looping over DataRow objects is twofold: the array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. The casts in the code above show where this matters. Using the DynamicDataRow class I'll show in a minute, this code can be changed to look like this:

    public int UpdateAllSafeTitles()
    {
        int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks");
        if (result < 0)
            return result;
        result = 0;
        foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
        {
            dynamic entry = new DynamicDataRow(row);
            string newSafeTitle = this.GetSafeTitle(entry.title);
            if (newSafeTitle != entry.safeTitle)
            {
                this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk",
                    this.CreateParameter("@safeTitle", newSafeTitle),
                    this.CreateParameter("@pk", entry.pk));
                result++;
            }
        }
        return result;
    }

    The code looks a bit more natural and describes what's happening a little more clearly as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class.

    Creating your own custom Dynamic Objects

    .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenarios), but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition, you can create DynamicObject implementations that can perform custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Basically you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access.
    In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what the class looks like:

    /// <summary>
    /// This class provides an easy way to turn a DataRow
    /// into a Dynamic object that supports direct property
    /// access to the DataRow fields.
    ///
    /// The class also automatically fixes up DbNull values
    /// (null into .NET and DbNull to DataRow)
    /// </summary>
    public class DynamicDataRow : DynamicObject
    {
        /// <summary>
        /// Instance of object passed in
        /// </summary>
        DataRow DataRow;

        /// <summary>
        /// Pass in a DataRow to work off
        /// </summary>
        /// <param name="dataRow"></param>
        public DynamicDataRow(DataRow dataRow)
        {
            DataRow = dataRow;
        }

        /// <summary>
        /// Returns a value from a DataRow items array.
        /// If the field doesn't exist null is returned.
        /// DbNull values are turned into .NET nulls.
        /// </summary>
        /// <param name="binder"></param>
        /// <param name="result"></param>
        /// <returns></returns>
        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            result = null;
            try
            {
                result = DataRow[binder.Name];
                if (result == DBNull.Value)
                    result = null;
                return true;
            }
            catch { }
            result = null;
            return false;
        }

        /// <summary>
        /// Property setter implementation writes the value
        /// into the underlying DataRow field
        /// </summary>
        /// <param name="binder"></param>
        /// <param name="value"></param>
        /// <returns></returns>
        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            try
            {
                if (value == null)
                    value = DBNull.Value;
                DataRow[binder.Name] = value;
                return true;
            }
            catch { }
            return false;
        }
    }

    To demonstrate the basic features, here's a short test:

    [TestMethod]
    [ExpectedException(typeof(RuntimeBinderException))]
    public void BasicDataRowTests()
    {
        DataTable table = new DataTable("table");
        table.Columns.Add(new DataColumn() { ColumnName = "Name", DataType = typeof(string) });
        table.Columns.Add(new DataColumn() { ColumnName = "Entered", DataType = typeof(DateTime) });
        table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) });

        DataRow row = table.NewRow();
        DateTime now = DateTime.Now;
        row["Name"] = "Rick";
        row["Entered"] = now;
        row["NullValue"] = null; // converted into DbNull

        dynamic drow = new DynamicDataRow(row);
        string name = drow.Name;
        DateTime entered = drow.Entered;
        string nulled = drow.NullValue;

        Assert.AreEqual(name, "Rick");
        Assert.AreEqual(entered, now);
        Assert.IsNull(nulled);

        // this should throw a RuntimeBinderException
        Assert.AreEqual(entered, drow.enteredd);
    }

    The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa, which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP - but the fact that we can create a dynamic type that uses a DataRow as its 'DataSource' to serve member values. It's a pretty useful feature if you think about it, especially given how little code it takes to implement.
    By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow:

    • Direct property syntax
    • Automatic type casting, so no explicit casts are required

    Caveats

    As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing, where everything is compiled and linked by the compiler/linker, member invocations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - are going to be faster than the equivalent dynamic code. However, in the above code the difference between running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly, so that repeated calls to the same operations are routed very efficiently, which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far, performance is pretty good for repeated operations (i.e. in loops). While usually a little slower, the perf hit is typically a lot less than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime, and so there's no type recognition until runtime. This means no IntelliSense at development time, and any invalid references that call into 'properties' (i.e. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:

    // this should throw a RuntimeBinderException
    Assert.AreEqual(entered, drow.enteredd);

    Dynamic - Lots of uses

    The arrival of dynamic types in .NET has been met with mixed emotions. Die-hard .NET developers decry dynamic types as an abomination to the language. After all, what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand, there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here, you can also implement 'method missing' behavior on objects with InvokeMember, which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: it's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, .NET
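    As a footnote to the 'method missing' idea mentioned above, here is a minimal sketch of TryInvokeMember in action; the MethodMissingDemo class and its echo behavior are purely illustrative, not part of the article's code.

    using System;
    using System.Dynamic;

    // Hypothetical demo: any method called on this object is routed to
    // TryInvokeMember, which simply reports the call.
    public class MethodMissingDemo : DynamicObject
    {
        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
            // binder.Name is the method name the caller used; args holds its arguments
            result = string.Format("You called {0} with {1} argument(s)", binder.Name, args.Length);
            return true; // report the call as handled
        }
    }

    class Program
    {
        static void Main()
        {
            dynamic demo = new MethodMissingDemo();
            // Neither method exists statically; both resolve through TryInvokeMember
            Console.WriteLine(demo.DoSomething(1, 2, 3));
            Console.WriteLine(demo.AnythingGoes("hello"));
        }
    }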


  • Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

    - by Argenis
    Today Jeffrey Langdon (@jlangdon) posed the following questions on #SQLHelp, so I set out to answer them, and I said to myself: "Hey, I haven't blogged in a while, how about I blog about this particular topic?". Thus, this post was born. (If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal.)

    1) Do ghost records get copied over in a backup?

    If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk - it doesn't crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents. Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately, but rather left in place until another DML operation comes along and uses them. As a matter of fact, all of the allocated space for a database will be included in a full backup. So, this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you've purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space on your data file(s) might still contain sensitive data! A SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles. You might _STILL_ have (not so) empty space on your files! One approach that you can follow is to export all of the data on your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though. So you have to weigh your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind.

    2) Does Compression get rid of ghost records (2008)?

    The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this by running a simple script:

    CREATE DATABASE GhostRecordsTest
    GO
    USE GhostRecordsTest
    GO
    CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                          myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')
    ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
    GO
    INSERT INTO myTable DEFAULT VALUES
    GO 10
    DELETE myTable WHERE myPrimaryKey % 2 = 0
    DBCC TRACEON(2514)
    DBCC CHECKTABLE(myTable)

    Trace flag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information in its output. For the above script: "Ghost Record count = 5".

    Until next time,

    -Argenis


  • CodePlex Daily Summary for Sunday, February 20, 2011

    Popular Releases

    • View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.119.18): Initial release.
    • Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features: WPF desktop explorer client; Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above); supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File; the explorer supports operations running on multiple remote applications at the same time; the explorer detects application disconnect (1-2 second delay); the explorer confirms operation completed status; the explorer d...
    • Advanced Explorer for Wp7: Advanced Explorer for Wp7 Version 1.4 Test8: Added option to run under the lock screen. Fixed a bug when you open a pdf/mobi file without starting Adobe Reader/Amazon Kindle first. Boosted loading time for folders. Added the \Windows directory (all devices). You can now interact with the filesystem while it is loading!
    • Game Files Open - Map Editor: Game Files Open - Map Editor Beta 2 v1.0.0.0: The second beta release of the Map Editor; we have fixed a big bug in the files regen.
    • Document.Editor: 2011.6: What's new for Document.Editor 2011.6: new Left to Right and Right to Left support; new indent more/less support; improved Home tab; improved tooltips/shortcut keys; minor bug fixes, improvements and speed-ups.
    • Catel - WPF and Silverlight MVVM library: 1.2: Catel history legend: (+) Added, (*) Changed, (-) Removed, (x) Error/bug (fix). For more information about issues or new feature requests, please visit http://catel.codeplex.com. Version 1.2, release date 2011/02/17. Added/fixed: (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight; (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...
    • ??????????: All-In-One Code Framework ??? 2011-02-18: ?????All-In-One Code Framework?2011??????????!! http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????,?????AzureBingMaps??????,??Azure,WCF, Silverlight, Window Phone????????,????????????????????????。 ???: Windows Azure, SQL Azure, Windows Azure AppFabric, Windows Live Messenger Connect, Bing Maps. ?????: ??????HTML??? ??Windows PC?Mac?Silverlight??? ??Windows Phone?Silverlight??? ?????: http://blog.csdn.net/sjb5201/archive/2011...
    • Image.Viewer: 2011: First version of 2011.
    • Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...
    • VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: this release does not work with custom VsTortoise toolbars; these get removed every time you shut down Visual Studio (#7940). Build 29 (beta). New: added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item. Fix: TortoiseProc was called with invalid parameters when using TSVN 1.4.x or older, #7338 (thanks psifive). Fix: add-in does not work when "TortoiseSVN/bin" is not added to the PATH environment variable, #7357. Fix: missing error message when ...
    • Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: We are happy to introduce the latest version of Sense/Net with integrated ECM workflow capabilities! In the past weeks we have been working hard to migrate the product to .NET 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...
    • thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features: added a new option that allows properties on data contract types to be marked as virtual. Bug fixes: fixed a bug caused by certain project properties not being available on Web Service Software Factory projects; fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used; the menu item code now caters for CommandBar instances that are not available - for example, the Web Item CommandBar does not exist ...
    • Terminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a portable environment, you can use the "Clean Install" and target your portable folder; tested and it works fine. Changes for this release: reworked the ToolStrip settings to avoid the VS.NET clash with auto-generated files for .settings files (renamed to .settings.config); packaged both the log4net and ToolStripSettings files into the installer; upgraded the version inform...
    • SQL Server Process Info Log: Release: release.
    • AllNewsManager.NET: AllNewsManager.NET 1.3: This new version provides several new features, improvements and bug fixes. Some new features: online users, avatars, a copy function (to create a new article from another one), SEO improvements (friendly URLs), new admin buttons, and more...
    • Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011): moved to Beta stage; publish-photo feature; "email" field of the User object added; new Graph API objects: Group, Event; new Graph API connections: likes, groups, events.
    • DJME - The jQuery extensions for ASP.NET MVC: DJME2 - The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can go to http://www.dotnetage.com/djme.html. What is new? The Grid extension was added; the ModelBinder was added, which helps you create bindable data actions; the DnaFor() control factory was added, enabling model-bindable extensions; enhanced the ListBox and ComboBox data binding.
    • Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features; many bugfixes.
    • Build Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: Fixes and/or improvements. Major: added complete support for VC projects, including .vcxproj & .vcproj; all padding issues fixed; a project's assembly versions are only changed if the project has been modified. Minor: the order of versioning style values is now according to their respective positions in the attributes, i.e. Major, Minor, Build, Revision; fixed an issue with global variable storage in some projects; fixed an issue where, if a project item's file does not exist, a ...
    • Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. Bug fixes and minor feature requests added.

    New Projects

    • .NET Core Audio APIs: This project provides a lightweight set of .NET wrappers around each section of the Windows Core Audio APIs.
    • Akwen: Akwen aspirant.
    • bigserve: Made in loads of languages, a server alternative to IIS and Apache.
    • Cathedral game framework: (Will eventually be) a framework for creating 2D region-based games in Silverlight.
    • Gmail Like File Uploader In .NET: Many times I have wondered how Gmail uploads files and monitors the upload status. I suppose many of you have gone through the same experience. Most of you know that the Choose File button on the Gmail page is actually a Flash button, but what about the progress bar? AJAX, right
    • inicomInterface: ????ini???????COM??
    • iOnlineExamCenter - Trung Tâm Thi Trắc Nghiệm Trực Tuyến: A study of multiple-choice testing methods, applied to building an online exam center. The application is built on: .NET Framework v4.0, ASP.NET MVC v3, Entity Framework v4, Silverlight v4, jQuery.
    • Lee Utility Library: Lee Utility Library.
    • SocialCore: Social Core is a framework for developing social applications using a distributed infrastructure. The focus of Social Core is on collaboration, security and transparency. Privacy is not dead, it just is not implemented correctly.
    • TestAmazonCloud: tac.
    • Tftp.Net: This project implements the TFTP (Trivial File Transfer) protocol for .NET in an easy-to-use library. It allows you to integrate TFTP client and server functionality into your project. It is stable, unit-tested and comes with a sample TFTP client and server.
    • Using the Repository Pattern and DI to switch between SQL and XML: This is an example application showing how Dependency Injection and the Repository pattern can be used to allow interchangeable persistence between XML and SQL.
    • WebForms Razor converter - WinForms: Telerik developed a command-line "WebForms to Razor (CSHTML) conversion tool" and uploaded it to git - https://github.com/telerik/razor-converter. This project uses the same Telerik API, but the client is GUI/WinForms, allowing developers to convert multiple files in one go.
    • XNA and Dependency Injection: Test code sample for XNA and Dependency Injection.
    • XNA and IoC Container: Test code sample for XNA and IoC Container.
    • XNA and Unit Testing: Test code sample for XNA and Unit Testing.
    • Xplore World: Xplore World is an advanced web browser available for Windows XP/Vista/7; it is currently only available in Spanish. With this browser you can do wonderful things such as viewing images from any website your way, managing users, and editing your bookmarks.
    • ZO (ZeroOne): ZeroOne is the editor for programmers who think in binary.


  • ImageMagick failing to convert to JPG

    - by johnui
    Hi: We recently installed the latest version of ImageMagick onto our Linux server. I seem to be having issues performing the most basic of tasks. I am running this command line:

    /usr/bin/convert /location/to/source/design.ai /location/to/save/output.jpg

    Unfortunately, it saves output.jpg as an Illustrator file (if I rename the file to output.ai it opens). Even if I do this:

    /usr/bin/convert /location/to/source/design.ai -rotate 90 /location/to/save/design.jpg

    It rotates the file and saves it again as an Illustrator document. This happens with all file types (e.g. png, bmp, etc...). It appears ImageMagick cannot figure out what I want it converted to and just saves it as the same file type. Any ideas on fixing this? Regards: John


  • Help! Corrupt file recovery

    - by TheBumpper
    My supervisor's computer crashed last night, and I'm trying to help him out. He made an R script, but when he tried to open it, it was empty. For some reason the file is 7.9 kB, so it should not be empty, I think... Anyway, when I tried to open it, Gedit gave this error: "The file you opened has some invalid characters. If you continue editing this file you could corrupt this document. You can also choose another character encoding and try again.", along with the options to choose an encoding. It looked like this (with a red background):

    \00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\

    My question is: is there a way to restore the file? I hope someone has a brilliant idea.


  • Install Domain Controller – Part 1 of building my own development SharePoint 2010 farm

    - by ybbest
    As memory has become really cheap now, a couple of days ago I upgraded my laptop memory to 12 GB. Plus, I've got my old desktop, so now I've decided to build my own SharePoint farm at home and to document the steps to build a simple SharePoint farm. I will use Windows Server 2008 R2 and VMware. In this first part of the series on building my own SharePoint farm, I will create my domain controller. Here are the steps to install it:

    1. Open the command line by going to Run, typing CMD, and then typing dcpromo at the command line. The AD installation wizard will appear; click Next.
    2. Click Next as shown in the screenshot.
    3. Select "Create a new domain in a new forest" and click Next.
    4. Type a domain name (e.g. ybbest.com) and click Next.
    5. In my case, I select the Windows Server 2008 R2 forest functional level and click Next.
    6. Leave the defaults and click Next. (If you have not set a static IP address, you need to do so now.)
    7. You might get a scary prompt like the screenshot below; just ignore the message and click Yes.
    8. Leave the default settings and click Next.
    9. Type a password to use when you need to restore your domain.
    10. Click Next and restart your computer; this will install your domain controller.


  • SharePoint 2010 Replaceable Parameter, some observations…

    - by svdoever
    SharePoint Tools for Visual Studio 2010 provides a rudimentary mechanism for replaceable parameters that you can use in files that are not compiled, like ascx files and your project property settings. The basics on this can be found in the documentation at http://msdn.microsoft.com/en-us/library/ee231545.aspx. There are some quirks, however. For example: my package name is MacawMastSP2010Templates, as defined in my package properties. I want to use the $SharePoint.Package.Name$ replaceable parameter in my feature properties. But this parameter does not get expanded in the "Deployment Path" property, even though other parameters work there, and it does work in the "Image Url" property. So I had to resort to explicitly naming the first part of the deployment path.

    You also see a special property for the "Receiver Class" in the format $SharePoint.Type.<GUID>.FullName$. The documentation gives the following description: "The full name of the type matching the GUID in the token. The format of the GUID is lowercase and corresponds to the Guid.ToString("D") format (that is, xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)." Not very clear. After some searching, it turned out to be the GUID as declared in my feature receiver code (see the sketch after this post's scripts).

    In other properties you see a different set of replaceable parameters. We have used a similar mechanism for replaceable parameters for years in our Macaw Solutions Factory for SharePoint 2007 development, where each replaceable parameter is a PowerShell function. This provides so much more power. For example, in a feature declaration we can say:

    Code Snippet

    <?xml version="1.0" encoding="utf-8" ?>
    <!-- Template expansion
         [[ProductDependency]] -> Wss3 or Moss2007
         [[FeatureReceiverAssemblySignature]] -> for example: Macaw.Mast.Wss3.Templates.SharePoint.Features, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6e9d15db2e2a0be5
         [[FeatureReceiverClass]] -> for example: Macaw.Mast.Wss3.Templates.SharePoint.Features.SampleFeature.FeatureReceiver.SampleFeatureFeatureReceiver
    -->
    <Feature Id="[[$Feature.SampleFeature.ID]]"
      Title="MAST [[$MastSolutionName]] Sample Feature"
      Description="The MAST [[$MastSolutionName]] Sample Feature, where all possible elements in a feature are showcased"
      Version="1.0.0.0"
      Scope="Site"
      Hidden="FALSE"
      ImageUrl="[[FeatureImage]]"
      ReceiverAssembly="[[FeatureReceiverAssemblySignature]]"
      ReceiverClass="[[FeatureReceiverClass]]"
      xmlns="http://schemas.microsoft.com/sharepoint/">
        <ElementManifests>
            <ElementManifest Location="ExampleCustomActions.xml" />
            <ElementManifest Location="ExampleSiteColumns.xml" />
            <ElementManifest Location="ExampleContentTypes.xml" />
            <ElementManifest Location="ExampleDocLib.xml" />
            <ElementManifest Location="ExampleMasterPages.xml" />

            <!-- Element files -->
            [[GenerateXmlNodesForFiles -path 'ExampleDocLib\*.*' -node 'ElementFile' -attributes @{Location = { RelativePathToExpansionSourceFile -path $_ }}]]
            [[GenerateXmlNodesForFiles -path 'ExampleMasterPages\*.*' -node 'ElementFile' -attributes @{Location = { RelativePathToExpansionSourceFile -path $_ }}]]
            [[GenerateXmlNodesForFiles -path 'Resources\*.resx' -node 'ElementFile' -attributes @{Location = { RelativePathToExpansionSourceFile -path $_ }}]]
        </ElementManifests>
    </Feature>

    We have a solution-level PowerShell script file named TemplateExpansionConfiguration.ps1 where we declare our variables (starting with a $) and include helper functions:

    Code Snippet
    # ==============================================================================================
    # NAME: product:\src\Wss3\Templates\TemplateExpansionConfiguration.ps1
    #
    # AUTHOR: Serge van den Oever, Macaw
    # DATE  : May 24, 2007
    #
    # COMMENT:
    # Nota bene: define variable and function definitions global to be visible during template expansion.
    #
    # ==============================================================================================
    Set-PSDebug -strict -trace 0 # variables must have value before usage
    $global:ErrorActionPreference = 'Stop' # Stop on errors
    $global:VerbosePreference = 'Continue' # set to SilentlyContinue to get no verbose output

    # Load template expansion utility functions
    . product:\tools\Wss3\MastDeploy\TemplateExpansionUtil.ps1

    # If exists add solution expansion utility functions
    $solutionTemplateExpansionUtilFile = $MastSolutionDir + "\TemplateExpansionUtil.ps1"
    if ((Test-Path -Path $solutionTemplateExpansionUtilFile)) {
        . $solutionTemplateExpansionUtilFile
    }
    # ==============================================================================================

    # Expected: $Solution.ID; Unique GUID value identifying the solution (DON'T INCLUDE BRACKETS).
    # function: GuidUpperCaseWithoutCurlies -guid '{...}' ensures correct syntax
    $global:Solution = @{
        ID = GuidUpperCaseWithoutCurlies -guid '{d366ced4-0b98-4fa8-b256-c5a35bcbc98b}';
    }

    # DON'T INCLUDE BRACKETS for feature id's!!!
    # function: GuidUpperCaseWithoutCurlies -guid '{...}' ensures correct syntax
    $global:Feature = @{
        SampleFeature = @{
            ID = GuidUpperCaseWithoutCurlies -guid '{35de59f4-0c8e-405e-b760-15234fe6885c}';
        }
    }

    $global:SiteDefinition = @{
        TemplateBlankSite = @{
            ID = '12346';
        }
    }

    # To inherit from this content type add the delimiter (00) and then your own guid
    # ID: <base>00<newguid>
    $global:ContentType = @{
        ExampleContentType = @{
            ID = '0x01008e5e167ba2db4bfeb3810c4a7ff72913';
        }
    }

    # INCLUDE BRACKETS for column id's and make them LOWER CASE!!!
    # function: GuidLowerCaseWithCurlies -guid '{...}' ensures correct syntax
    $global:SiteColumn = @{
        ExampleChoiceField = @{
            ID = GuidLowerCaseWithCurlies -guid '{69d38ce4-2771-43b4-a861-f14247885fe9}';
        };
        ExampleBooleanField = @{
            ID = GuidLowerCaseWithCurlies -guid '{76f794e6-f7bd-490e-a53e-07efdf967169}';
        };
        ExampleDateTimeField = @{
            ID = GuidLowerCaseWithCurlies -guid '{6f176e6e-22d2-453a-8dad-8ab17ac12387}';
        };
        ExampleNumberField = @{
            ID = GuidLowerCaseWithCurlies -guid '{6026947f-f102-436b-abfd-fece49495788}';
        };
        ExampleTextField = @{
            ID = GuidLowerCaseWithCurlies -guid '{23ca1c29-5ef0-4b3d-93cd-0d1d2b6ddbde}';
        };
        ExampleUserField = @{
            ID = GuidLowerCaseWithCurlies -guid '{ee55b9f1-7b7c-4a7e-9892-3e35729bb1a5}';
        };
        ExampleNoteField = @{
            ID = GuidLowerCaseWithCurlies -guid '{f9aa8da3-1f30-48a6-a0af-aa0a643d9ed4}';
        };
    }

    This gives so many more possibilities, like for example the elements-file expansion, where a PowerShell function iterates through a folder and generates the required XML nodes. I think I will bring back this mechanism so it can work together with the built-in replaceable parameters; there are hooks to define your custom replacements, as described by Waldek in this blog post.
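    For context, the GUID in the $SharePoint.Type.<GUID>.FullName$ token is the one applied to the receiver class itself. Here is a minimal sketch of what such a class might look like; the class name and GUID value are illustrative assumptions, not taken from the post.

    using System;
    using System.Runtime.InteropServices;
    using Microsoft.SharePoint;

    // The Guid attribute below is what the $SharePoint.Type.<GUID>.FullName$
    // token matches; note the lowercase Guid.ToString("D") format.
    [Guid("bd0e965d-54bd-4d5d-a412-3d07baa4e963")] // hypothetical GUID
    public class TemplatesFeatureReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            // feature activation logic goes here
        }
    }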


  • bash command for each file in a folder

    - by Robert
    I have a set of files to which I would like to apply the same command, and the output should have the same name as the processed file but with a different extension. Currently I am renaming, for example, /my/data/Andrew.doc to /my/data/Andrew.txt. I would like to do this for all the .doc files in the /my/data/ folder, preserving the names. I tried several versions, but I guess I have something wrong in the syntax, as I am new to Linux.


  • iptables syn flood countermeasure

    - by Penegal
    I'm trying to adjust my iptables firewall to increase the security of my server, and I found something a bit problematic here: I have to set the INPUT policy to ACCEPT and, in addition, to have a rule saying iptables -I INPUT -i eth0 -j ACCEPT. Here comes my script (launched manually for tests):

    #!/bin/sh
    IPT=/sbin/iptables

    echo "Clearing firewall rules"
    $IPT -F
    $IPT -Z
    $IPT -t nat -F
    $IPT -t nat -Z
    $IPT -t mangle -F
    $IPT -t mangle -Z
    $IPT -X

    echo "Defining logging policy for dropped packets"
    $IPT -N LOGDROP
    $IPT -A LOGDROP -j LOG -m limit --limit 5/min --log-level debug --log-prefix "iptables rejected: "
    $IPT -A LOGDROP -j DROP

    echo "Setting firewall policy"
    $IPT -P INPUT DROP # Deny all incoming connections
    $IPT -P OUTPUT ACCEPT # Allow all outgoing connections
    $IPT -P FORWARD DROP # Deny all forwarding

    echo "Allowing connections from/to lo and incoming connections from eth0"
    $IPT -I INPUT -i lo -j ACCEPT
    $IPT -I OUTPUT -o lo -j ACCEPT
    #$IPT -I INPUT -i eth0 -j ACCEPT

    echo "Setting SYN flood countermeasures"
    $IPT -A INPUT -p tcp -i eth0 --syn -m limit --limit 100/second --limit-burst 200 -j LOGDROP

    echo "Allowing outgoing traffic corresponding to already initiated connections"
    $IPT -A OUTPUT -p ALL -m state --state ESTABLISHED,RELATED -j ACCEPT

    echo "Allowing incoming SSH"
    $IPT -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH -j ACCEPT

    echo "Setting SSH bruteforce attacks countermeasures (deny more than 10 connections every 10 minutes)"
    $IPT -A INPUT -p tcp --dport 22 -m recent --update --seconds 600 --hitcount 10 --rttl --name SSH -j LOGDROP

    echo "Allowing incoming traffic for HTTP, SMTP, NTP, PgSQL and SolR"
    $IPT -A INPUT -p tcp --dport 25 -i eth0 -j ACCEPT
    $IPT -A INPUT -p tcp --dport 80 -i eth0 -j ACCEPT
    $IPT -A INPUT -p udp --dport 123 -i eth0 -j ACCEPT
    $IPT -A INPUT -p tcp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT
    $IPT -A INPUT -p udp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT
    $IPT -A INPUT -p tcp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT
    $IPT -A INPUT -p udp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT

    echo "Allowing outgoing traffic for ICMP, SSH, whois, SMTP, DNS, HTTP, PgSQL and SolR"
    $IPT -A OUTPUT -p tcp --dport 22 -j ACCEPT
    $IPT -A OUTPUT -p tcp --dport 25 -o eth0 -j ACCEPT
    $IPT -A OUTPUT -p tcp --dport 43 -o eth0 -j ACCEPT
    $IPT -A OUTPUT -p tcp --dport 53 -o eth0 -j ACCEPT
    $IPT -A OUTPUT -p udp --dport 53 -o eth0 -j ACCEPT
    $IPT -A OUTPUT -p tcp --dport 80 -o eth0 -j ACCEPT
    $IPT -A OUTPUT -p udp --dport 80 -o eth0 -j ACCEPT
    #$IPT -A OUTPUT -p tcp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT
    #$IPT -A OUTPUT -p udp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT
    #$IPT -A OUTPUT -p tcp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT
    #$IPT -A OUTPUT -p udp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT
    $IPT -A OUTPUT -p tcp --sport 5433 -o eth0.2654 -j ACCEPT
    $IPT -A OUTPUT -p udp --sport 5433 -o eth0.2654 -j ACCEPT
    $IPT -A OUTPUT -p tcp --sport 8983 -o eth0.2654 -j ACCEPT
    $IPT -A OUTPUT -p udp --sport 8983 -o eth0.2654 -j ACCEPT
    $IPT -A OUTPUT -p icmp -j ACCEPT

    echo "Allowing outgoing FTP backup"
    $IPT -A OUTPUT -p tcp --dport 20:21 -o eth0 -d 91.121.190.78 -j ACCEPT

    echo "Dropping and logging everything else"
    $IPT -A INPUT -s 0/0 -j LOGDROP
    $IPT -A OUTPUT -j LOGDROP
    $IPT -A FORWARD -j LOGDROP

    echo "Firewall loaded."
echo "Maintaining new rules for 3 minutes for tests" sleep 180 $IPT -nvL echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X $IPT -P INPUT ACCEPT $IPT -P OUTPUT ACCEPT $IPT -P FORWARD ACCEPT When I launch this script (I only have a SSH access), the shell displays every message up to Maintaining new rules for 3 minutes for tests, the server is unresponsive during the 3 minutes delay and then resume normal operations. The only solution I found until now was to set $IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT, but this configuration does not protect me of any attack, which is a great shame for a firewall. I suspect that the error comes from my script and not from iptables, but I don't understand what's wrong with my script. Could some do-gooder explain me my error, please? EDIT: here comes the result of iptables -nvL with the "accept all input" ($IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT) solution : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1 52 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:8983 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 2 728 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.78 tcp dpts:20:21 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (5 references) pkts bytes target prot opt in out source destination 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 EDIT #2 : I modified my script (policy ACCEPT, defining authorized incoming packets then logging and dropping everything else) to write iptables -nvL results to a file and to allow only 
10 ICMP requests per second, logging and dropping everything else. The result proved unexpected : while the server was unavailable to SSH connections, even already established, I ping-flooded it from another server, and the ping rate was restricted to 10 requests per second. During this test, I also tried to open new SSH connections, which remained unanswered until the script flushed rules. Here comes the iptables stats written after these tests : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 6 360 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "w00tw00t.at.ISC.SANS." ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: anoticiapb.com.br" ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: www.anoticiapb.com.br" ALGO name bm TO 65535 105 8820 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 10/sec burst 5 830 69720 LOGDROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:8983 16 1684 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 owner UID match 33 0 0 LOGDROP udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 owner UID match 33 116 11136 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.18 tcp dpts:20:21 7 1249 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (11 references) pkts bytes target prot opt in out source destination 35 3156 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 1/sec burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 859 73013 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Here 
comes the log content added during this test : Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55666 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55667 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55668 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55669 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:52 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55670 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:54 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55671 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:58 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55672 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=6 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=7 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=8 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=9 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=59 Mar 28 09:53:00 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=152 Mar 28 09:53:01 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=246 Mar 28 09:53:02 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 
TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=339 Mar 28 09:53:03 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=432 Mar 28 09:53:04 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=524 Mar 28 09:53:05 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=617 Mar 28 09:53:06 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=711 Mar 28 09:53:07 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=804 Mar 28 09:53:08 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=897 Mar 28 09:53:16 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61402 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:19 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61403 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:21 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55674 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:53:25 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61404 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55675 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55676 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55677 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:38 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55678 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= 
MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55679 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5055 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:41 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55680 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:42 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5056 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:45 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55681 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:48 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5057 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 If I correctly interpreted these results, they say that ICMP rules were correctly interpreted by iptables, but SSH rules were not. This does not make any sense... Does somebody understand where my error comes from? EDIT #3 : After some more tests, I found out that commenting the SYN flood countermeasure removes the problem. I continue researches in this way but, meanwhile, if somebody sees my anti SYN flood rule error...
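
    The likely culprit is the limit match semantics: -m limit matches packets while they are still within the stated rate, so the rule -A INPUT -p tcp --syn -m limit --limit 100/second --limit-burst 200 -j LOGDROP drops the first SYNs of each interval (including legitimate SSH handshakes) and lets only the overflow continue down the chain. Note that the ICMP rule in EDIT #2 uses the opposite, correct pattern (ACCEPT while under the limit, then LOGDROP), which is why it behaved as expected. A conventional fix inverts the SYN rule the same way (a sketch, untested on this exact setup; SYNCHECK is just an illustrative chain name):

        # Accept SYNs while under 100/second (burst 200); log and drop only the excess.
        # RETURN hands in-rate packets back to INPUT to continue normal evaluation.
        $IPT -N SYNCHECK
        $IPT -A SYNCHECK -m limit --limit 100/second --limit-burst 200 -j RETURN
        $IPT -A SYNCHECK -j LOGDROP
        $IPT -A INPUT -p tcp -i eth0 --syn -j SYNCHECK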

    Read the article

  • Work Item Visualizer for TFS 2010 - New Extension

    - by MikeParks
    I released another new extension to the Visual Studio Gallery today called Work Item Visualizer for TFS 2010. I've only heard positive things about it so far; hopefully it stays that way :) Basically, it creates a diagram of all work items linked to a work item ID which the user specifies in a search box. This extension was coded using DGML (the same graph rendering language used for the Visual Studio 2010 Architecture Tools). It was pretty cool getting a chance to create something using some of the newest technology out there. Well, I just wanted to throw a blog up to get the word out on it a little more. If you're using Visual Studio 2010 with Team Foundation Server 2010, feel free to check it out! Thanks everyone. Download Link: http://visualstudiogallery.msdn.microsoft.com/en-us/a35b6010-750b-47f6-a7a5-41f0fa7294d2

    What it does:
    · Creates a DGML graph to visualize linked TFS Work Items by entering a Work Item ID in the toolbar search box

    How it benefits you:
    · Allows you to easily analyze the hierarchy of your TFS Work Items
    · Gives you the ability to perform basic risk/impact analysis when creating or editing Work Items
    · Great for meetings in the case that you need to discuss the entire scope of linked Work Items
    · Easier project planning
    · Eliminates the need to create TFS queries or reports to view a tree of Work Items
    · Easily lets you see the entire tree of work items linked to the one you're working on

    Navigation Tips:
    · Use Ctrl + Mouse Wheel Scroll to zoom in and out
    · Use Ctrl + Left Mouse click (and hold) to move the document around
    · Right click on the DGML area for more options (like copying the image or viewing in groups)
    · Clicking on each node highlights that node and the links connected to it
    · Colors in the legend can be changed
    · When work item nodes are deleted, the view is automatically updated
    · Double clicking on a work item node will open up the Work Item's URL

    Try it out on work items that have several links and let us know what you think. A big thanks goes out to everyone working on the http://visualization.codeplex.com/ project for publishing the source code on CodePlex, which really helped me learn how DGML (Directed Graph Markup Language, new to the Visual Studio 2010 Architecture Tools) works! - Mike
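
    For anyone new to DGML, the format is plain XML; a trivial graph of two linked work items looks something like this (a hand-written sketch, not output from the extension; the IDs and labels are made up):

        <?xml version="1.0" encoding="utf-8"?>
        <!-- Two work-item nodes joined by a directed link. -->
        <DirectedGraph xmlns="http://schemas.microsoft.com/vs/2009/dgml">
          <Nodes>
            <Node Id="WI100" Label="User Story 100" />
            <Node Id="WI101" Label="Task 101" />
          </Nodes>
          <Links>
            <Link Source="WI100" Target="WI101" />
          </Links>
        </DirectedGraph>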

    Read the article

  • How to Use Breaks in Microsoft Word to Better Format Your Documents

    - by Matthew Guay
    Have you ever struggled to get the formatting of a long document looking like you want in each section? Let’s explore the Breaks tool in Word and see how you can use breaks to get your documents formatted better. Word includes so many features, it’s easy to overlook some that can be the exact thing we’re looking for. Most of us have used Page Breaks in Word, but Word also includes several other breaks to help you format your documents. Let’s look at each break and see how you can use them in your documents.

    Read the article

  • Essbase 11.1.2 - JVM_OPTION settings for Essbase

    - by sujata
    When tuning the heap size for Essbase, there are two JVM_OPTION settings available - one for the Essbase agent and one for the Essbase applications that use custom-defined functions (CDFs), custom-defined macros (CDMs), data mining, triggers or external authentication. The ESS_JVM_OPTION setting is used for the applications, mainly for CDFs, CDMs, data mining, triggers and external authentication. The ESS_CSS_JVM_OPTION setting is used to set the heap size for the Essbase agent.
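
    In practice these are set as environment variables before the agent starts; for example (a sketch - the names follow the usual numbered ESS_JVM_OPTION1 convention, and the heap sizes are illustrative only):

        # Give the Essbase agent a 1 GB maximum heap...
        export ESS_CSS_JVM_OPTION1=-Xmx1024m
        # ...and the application processes 512 MB for CDFs, CDMs, triggers, etc.
        export ESS_JVM_OPTION1=-Xmx512m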

    Read the article

  • What’s the password for my FileVault 2 boot volume?

    - by cbowns
    Apple’s support document for FileVault 2 (a.k.a. “full disk encryption” or “FDE”) has lots of information about enabling FDE and what it means for booting the machine. However, it doesn’t cover one very important thing I’m trying to do: mount the drive in the Recovery HD environment to reinstall OS X on it. The Recovery HD environment asks me for the volume passphrase so it can mount my drive and try to install OS X onto it. If this were an external drive on which I’d manually enabled FDE with diskutil, or an external Time Machine volume, I’d know the passphrase, because those make you pick one (just like a regular login password); but FileVault 2 never asked me for a volume passphrase (I assume it selects one behind the scenes). I’ve tried my main user’s password, but that doesn’t work, and neither does the recovery key set for the volume. Keychain Access doesn’t have anything that I could find. How do I unlock this volume?
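
    For reference, the Terminal in the Recovery environment can list CoreStorage volumes and attempt an unlock directly, which at least confirms which logical volume is asking (a sketch; the UUID and passphrase below are placeholders):

        # Find the logical volume UUID of the encrypted boot volume.
        diskutil corestorage list
        # Try to unlock it with a candidate passphrase.
        diskutil corestorage unlockVolume 11111111-2222-3333-4444-555555555555 -passphrase 'candidate-passphrase'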

    Read the article

  • build a Database from Ms Word list information...

    - by Jayron Soares
    Can someone please advise me on how to approach a given problem? I have a sequential list of metadata in a document in MS Word. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of each PROCESS, so the records can be queued into a database. For example:

        Process: Process Walker (1965)
        Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp.
        Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
        Type of procedure: Certiorari To The United States Court of Appeals for the Seventh Circuit.
        Parties: Walker Process Equipment, Inc.
        Sector: Systems is …
        Start Date: Argued October 12-13, 1965
        Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee-action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
        Report of the evolution process: petitioner, in answer to respond ..
        Importance: a) First case which established an analysis for the diagnosis of dispute…

    There are about 200 pages containing the information above. I have in mind the idea of creating an algorithm in Python to be able to break this information into sequenced records and try to store them in a web database [an open-source application that I’m looking for] in order to allow free consultations ...
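
    As a starting point, a regular expression over a plain-text export of the document would pull the process names out (a minimal sketch; it assumes the Word file has been saved as plain text under the hypothetical name records.txt):

        import re

        # Each record starts with a "Process:" line; capture what follows it.
        PROCESS_RE = re.compile(r"^Process:\s*(.+)$", re.MULTILINE)

        with open("records.txt", encoding="utf-8") as fh:
            text = fh.read()

        for name in PROCESS_RE.findall(text):
            print(name)   # e.g. "Process Walker (1965)"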

    Read the article

  • Microsoft Office 2013 Issue

    - by Liz
    A few days ago I opened my Microsoft Office programs and discovered that they are missing the editing icons at the top; some of them will appear if you scroll over them, but not all. Also, in PowerPoint the slides show in the side window with a red "x". I have tried to uninstall and reinstall Office 2013, but I have had no luck. This issue is in every Office program (Excel, PowerPoint, Word, Access, Outlook, etc.). I also can't see the text when I type. It's there (I can see it when I print the document), but nothing shows on the screen. Does anyone have a solution for this issue?

    Read the article

  • CHM to EPUB Converter

    - by griegs
    I recently bought an iriver ebook reader, but it doesn't support CHM files. I have bought a bunch of O'Reilly books and have the soft copies as well. Some books convert quite well using Calibre. However, there is one book I have that just won't convert. So I used HH to decompile it, and now I have a bunch of HTML files. These HTML files contain references to images in folders. How can I now join them all into a single document which Calibre can read and properly convert into an EPUB, or am I going about this the wrong way? I've read this question, but it removes the images, and the images are quite important to the content.
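
    One route worth trying: point Calibre's command-line converter at the top-level HTML page of the decompiled tree; given an HTML input, it follows local links and bundles the referenced images into the output (a sketch - index.html is a guess at the name of the decompiled entry page):

        # Convert the decompiled HTML tree to EPUB, following links from the entry page.
        ebook-convert index.html book.epub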

    Read the article

  • First look at the cloud with Google App Engine and Udacity

    - by Ken Hortsch
    Udacity is a free online university and offers CS253, an intro to web development class. Since I am currently a web developer/architect in ASP.NET (and a recent project has brought me back to Java), it is a bit light. However, it does offer me a nice problem set and my first exposure to writing cloud apps with GAE and the Google Datastore. Steve Huffman, who developed reddit, is the instructor and offers nice real-life stories. Give it a look.

    Read the article

  • Splwow64 with TS Easy Print

    - by Tim Brigham
    I have an application (Sage MIP Fund Accounting) which exports data to Excel. In this process it uses an internal print driver. Since we upgraded from 2008 to 2008 R2, this export process causes system hangs. This has been isolated down to the splwow64 executable hanging while the Excel document is building. If I kill the splwow64 executable, things function properly (I just can't print the document once it is completed). This only occurs while using printer redirection with the Remote Desktop Easy Print function; if I pull the printer redirection, things work exactly as expected. I've spent the last couple of hours looking at hotfixes and driver upgrades, since this appears to be a problem specifically with how the Remote Desktop Easy Print printer is functioning. Is anyone aware of a hotfix which would be applicable in this situation? I don't want to grab every hotfix for redirected printing and start throwing them out there.

    Read the article

  • Integrating F# in SharpDevelop

    - by Marko Apfel
    After installing SharpDevelop 4, the F# Interactive could not be activated. In my case the correct folder of the F# installation had to be specified in the application config file. So I opened SharpDevelop.exe.config and set this entry in the appSettings section:

        <add key="alt_fs_bin_path" value="C:\Program Files\FSharp\bin" />

    Read the article

  • Limiting DOPs &ndash; Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. While Dan is running on a big system, he is running with Database Resource Manager and he is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs.

    Q: How do I control statements with very high DOPs driven from user hints in queries?

    A: The best way to do this is to work with DBRM and impose limits on consumer groups. The max DOP setting you can set in DBRM allows you to overwrite the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM, which overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints.

    Q: What happens if I set parallel_max_servers higher than processes (e.g. the max number of processes allowed to run on my machine)?

    A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter, you should get a warning in the alert log stating it cannot take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; parallel_max_servers will prevent this. And since parallel_max_servers should be lower than max processes, you can never go over either...
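
    For reference, the DOP cap lives in the PARALLEL_DEGREE_LIMIT_P1 attribute of a plan directive; capping a consumer group at DOP 32 would look something like this (a sketch - the plan and group names are made up, and both are assumed to already exist):

        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan                     => 'DAYTIME_PLAN',     -- hypothetical plan name
            group_or_subplan         => 'REPORTING_GROUP',  -- hypothetical consumer group
            comment                  => 'Cap parallel statements at DOP 32',
            parallel_degree_limit_p1 => 32);
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /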

    Read the article

< Previous Page | 920 921 922 923 924 925 926 927 928 929 930 931  | Next Page >