Search Results

Search found 8550 results on 342 pages for 'datetime operation'.


  • JPanel clock and button merge

    - by user1509628
    I'm having a problem with in merging time and button together... I can't get the button show in the JPanel. This is my code: import java.awt.Color; import java.awt.Font; import java.text.DateFormat; import java.text.SimpleDateFormat; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JPanel; public final class Date_Time extends JFrame{ private static final long serialVersionUID = 1L; private JPanel show_date_time = new JPanel(); private JLabel time = new JLabel("Time:"); private JLabel show_time = new JLabel("Show Time"); DateFormat dateFormat2 = new SimpleDateFormat("h:mm:ss a"); java.util.Date date2; private JLabel label; private JPanel panel; public Date_Time(){ this.setSize(300, 300); this.setResizable(false); getContentPane().add(Show_Time_date()); } private JButton button1 = new JButton(); private JFrame frame1 = new JFrame(); public JPanel Show_Time_date(){ frame1.add(show_date_time); show_date_time.setBackground(Color.ORANGE); frame1.add(button1); getShow_date_time().setLayout(null); Font f; f = new Font("SansSerif", Font.PLAIN, 15); getTime().setBounds(0,250,400,30); getTime().setFont(f); getShow_date_time().add(getTime()); setShow_time(new JLabel("")); updateDateTime(); getShow_time().setBounds(37,250,400,30); getShow_time().setFont(f); getShow_date_time().add(getShow_time()); return getShow_date_time(); } public static void main(String[] args) { Date_Time Main_win=new Date_Time(); Main_win.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); Main_win.setVisible(true); } public void updateDateTime() { Thread th = new Thread(new Runnable() { @Override public void run() { while(true) { date2 = new java.util.Date(); String dateTime = dateFormat2.format(date2); getShow_time().setText(dateTime); getShow_time().updateUI(); } } }); th.start(); } /** * @return the show_time */ public JLabel getShow_time() { return show_time; } /** * @param show_time the show_time to set */ public void setShow_time(JLabel show_time) { this.show_time = show_time; } /** * @return the time */ public JLabel getTime() { return time; } /** * @param time the time to set */ public void setTime(JLabel time) { this.time = time; } /** * @return the show_date_time */ public JPanel getShow_date_time() { return show_date_time; } /** * @param show_date_time the show_date_time to set */ public void setShow_date_time(JPanel show_date_time) { this.show_date_time = show_date_time; } /** * @return the label1 */ /** * @param label1 the label1 to set */ /** * @return the label */ public JLabel getLabel() { return label; } /** * @param label the label to set */ public void setLabel(JLabel label) { this.label = label; } /** * @return the panel */ public JPanel getPanel() { return panel; } /** * @param panel the panel to set */ public void setPanel(JPanel panel) { this.panel = panel; } }
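    A minimal sketch of one way to get the clock and the button onto the same panel: build a single JPanel, add both components to it, and drive the label from a javax.swing.Timer so updates happen on the Event Dispatch Thread rather than in a tight background loop. Class, label and button names below are illustrative, not taken from the code above.

    ```java
    import java.awt.BorderLayout;
    import java.text.SimpleDateFormat;
    import javax.swing.*;

    public class ClockPanelDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Clock");
                JPanel panel = new JPanel(new BorderLayout());

                JLabel clockLabel = new JLabel("", SwingConstants.CENTER);
                JButton button = new JButton("Click me");
                panel.add(clockLabel, BorderLayout.CENTER);
                panel.add(button, BorderLayout.SOUTH);   // clock and button share one panel

                SimpleDateFormat fmt = new SimpleDateFormat("h:mm:ss a");
                // javax.swing.Timer fires on the EDT, so updating the label here is safe
                new Timer(1000, e -> clockLabel.setText(fmt.format(new java.util.Date()))).start();

                frame.add(panel);
                frame.setSize(300, 300);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }
    ```

    Creating a second JFrame (frame1) and adding the button to it, as the code above does, is the likely reason the button never appears in the panel that is actually shown.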

    Read the article

  • How to design a data model that deals with (real) contracts?

    - by Geoffrey
    I was looking for some advice on designing a data model for contract administration. The general life cycle of a contract is thus: Contract is created and in a "draft" state. It is viewable internally and changes may be made. Contract goes out to vendor, status is set to "pending" Contract is rejected by vendor. At this state, nothing can be done to the contract. No statuses may be added to the collection. Contract is accepted by vendor. At this state, nothing can be done to the contract. No statuses may be added to the collection. I obviously want to avoid a situation where the contract is accepted and, say, the amount is changed. Here are my classes: [EnforceNoChangesAfterDraftState] public class VendorContract { public virtual Vendor Vendor { get; set; } public virtual decimal Amount { get; set; } public virtual VendorContact VendorContact { get; set; } public virtual string CreatedBy { get; set; } public virtual DateTime CreatedOn { get; set; } public virtual FileStore Contract { get; set; } public virtual IList<VendorContractStatus> ContractStatus { get; set; } } [EnforceCorrectWorkflow] public class VendorContractStatus { public virtual VendorContract VendorContract { get; set; } public virtual FileStore ExecutedDocument { get; set; } public virtual string Status { get; set; } public virtual string Reason { get; set; } public virtual string CreatedBy { get; set; } public virtual DateTime CreatedOn { get; set; } } I've omitted the filestore class, which is basically a key/value lookup to find the document based on its guid. The VendorContractStatus is mapped as a many-to-one in Nhibernate. I then use a custom validator as described here. If anything but draft is returned in the VendorContractStatus collection, no changes are allowed. Furthermore the VendorContractStatus must follow the correct workflow (you can add a rejected after a pending, but you can't add anything else to the collection if a reject or accepted exists, etc.). All sounds alright? Well a colleague has argued that we should simply add an "IsDraft" bool property to VendorContract and not accept updates if IsDraft is false. Then we should setup a method inside of VendorContractStatus for updating the status, if something gets added after a draft, it sets the IsDraft property of VendorContract to false. I do not like this as it feels like I'm dirtying up the POCOs and adding logic that should persist in the validation area, that no rules should really exist in these classes and they shouldn't be aware of their states. Any thoughts on this and what is the better practice from a DDD perspective? From my view, if in the future we want more complex rules, my way will be more maintainable over the long run. Say we have contracts over a certain amount to be approved by a manager. I would think it would be better to have a one-to-one mapping with a VendorContractApproval class, rather than adding IsApproved properties, but that's just speculation. This might be splitting hairs, but this is the first real gritty enterprise software project we've done. Any advice would be appreciated!
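    As a middle ground between the validator attribute and an IsDraft flag, the workflow rule can live in a small standalone policy object that the validator (or an application service) delegates to, so neither POCO carries state-transition logic. A rough sketch, with the status names and method names assumed for illustration:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch only: a stateless policy that encodes the contract workflow, so the
    // entities stay persistence-oriented and the custom validators just delegate here.
    public static class ContractWorkflowPolicy
    {
        private static readonly string[] TerminalStatuses = { "Accepted", "Rejected" };

        public static bool IsLocked(IEnumerable<VendorContractStatus> history)
        {
            return history.Any(s => TerminalStatuses.Contains(s.Status, StringComparer.OrdinalIgnoreCase));
        }

        public static bool CanAppend(IEnumerable<VendorContractStatus> history, string newStatus)
        {
            if (IsLocked(history))
                return false;                                     // nothing after Accepted/Rejected

            if (newStatus == "Pending")
                return history.All(s => s.Status != "Pending");   // contract goes out once

            if (newStatus == "Accepted" || newStatus == "Rejected")
                return history.Any(s => s.Status == "Pending");   // vendor can only answer a sent contract

            return newStatus == "Draft" && !history.Any();
        }
    }
    ```

    The [EnforceCorrectWorkflow] validator can call CanAppend, and a later rule such as manager approval becomes another policy (or the separate VendorContractApproval mapping you mention) rather than another flag on the entity.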

    Read the article

  • System.Data.SqlClient.SqlException: Must declare the scalar variable "@"

    - by Owala Wilson
    Haloo everyone...I am new to asp.net and I have been trying to query one table, and if the value returns true execute another query. here is my code: protected void Button1_Click(object sender, EventArgs e) { SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["MyServer"].ConnectionString); SqlConnection con1 = new SqlConnection(ConfigurationManager.ConnectionStrings["MyServer"].ConnectionString); int amounts = Convert.ToInt32(InstallmentPaidBox.Text) + Convert.ToInt32(InterestPaid.Text) + Convert.ToInt32(PenaltyPaid.Text); int loans = Convert.ToInt32(PrincipalPaid.Text) + Convert.ToInt32(InterestPaid.Text); con.Open(); DateTime date = Convert.ToDateTime(DateOfProcessing.Text); int month1 = Convert.ToInt32(date.Month); int year1 = Convert.ToInt32(date.Year); SqlCommand a = new SqlCommand("Select Date,max(Amount)as Amount from Minimum_Amount group by Date",con); SqlDataReader sq = a.ExecuteReader(); while (sq.Read()) { DateTime date2 = Convert.ToDateTime(sq["Date"]); int month2 = Convert.ToInt32(date2.Month); int year2 = Convert.ToInt32(date2.Year); if (month1 == month2 && year1 == year2) { int amount = Convert.ToInt32(sq["Amount"]); string scrip = "<script>alert('Data Successfully Added')</script>"; Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "Added", scrip); int areas = amount - Convert.ToInt32(InstallmentPaidBox.Text); int forwarded = Convert.ToInt32(BalanceBroughtTextBox.Text) + Convert.ToInt32(InstallmentPaidBox.Text); if (firsttime.Checked) { int balance = areas + Convert.ToInt32(PenaltyDue.Text) + Convert.ToInt32(InterestDue.Text); SqlCommand cmd = new SqlCommand("insert into Cash_Position(Member_No,Welfare_Amount, BFWD,Amount,Installment_Paid,Loan_Repayment,Principal_Payment,Interest_Paid,Interest_Due,Loan_Balance,Penalty_Paid,Penalty_Due,Installment_Arrears,CFWD,Balance_Due,Date_Prepared,Prepared_By) values(@a,@b,@c,@d,@e,@)f,@g,@h,@i,@j,@k,@l,@m,@n,@o,@p,@q", con); cmd.Parameters.Add("@a", SqlDbType.Int).Value = MemberNumberTextBox.Text; cmd.Parameters.Add("@b", SqlDbType.Int).Value = WelfareAmount.Text; cmd.Parameters.Add("@c", SqlDbType.Int).Value = BalanceBroughtTextBox.Text; cmd.Parameters.Add("@d", SqlDbType.Int).Value = amounts; cmd.Parameters.Add("@e", SqlDbType.Int).Value = InstallmentPaidBox.Text; cmd.Parameters.Add("@f", SqlDbType.Int).Value = loans; cmd.Parameters.Add("@g", SqlDbType.Int).Value = PrincipalPaid.Text; cmd.Parameters.Add("@h", SqlDbType.Int).Value = InterestDue.Text; cmd.Parameters.Add("@i", SqlDbType.Int).Value = InterestDue.Text; cmd.Parameters.Add("@j", SqlDbType.Int).Value = 0; cmd.Parameters.Add("@k", SqlDbType.Int).Value = PenaltyPaid.Text; cmd.Parameters.Add("@l", SqlDbType.Int).Value = PenaltyDue.Text; cmd.Parameters.Add("@m", SqlDbType.Int).Value = areas; cmd.Parameters.Add("@n", SqlDbType.Int).Value = forwarded; cmd.Parameters.Add("@o", SqlDbType.Int).Value = balance; cmd.Parameters.Add("@p", SqlDbType.Date).Value = date; cmd.Parameters.Add("@q", SqlDbType.Text).Value = PreparedBy.Text; int rows = cmd.ExecuteNonQuery(); con.Close();
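    The exception usually means the command text contains a bare "@": in the VALUES list above, values(@a,@b,@c,@d,@e,@)f,@g,... has the closing parenthesis slipped in front of "f", so SQL Server sees a parameter named just "@". A hedged sketch of the corrected INSERT; it also uses the second connection (con1), because the SqlDataReader on con is still open inside the loop and running another command on con would require MARS:

    ```csharp
    // Corrected parameter list: the stray ')' after @e is removed and the closing
    // parenthesis moved to the end of the VALUES clause. con1 must be opened before use.
    con1.Open();
    SqlCommand cmd = new SqlCommand(
        "insert into Cash_Position(Member_No,Welfare_Amount,BFWD,Amount,Installment_Paid," +
        "Loan_Repayment,Principal_Payment,Interest_Paid,Interest_Due,Loan_Balance," +
        "Penalty_Paid,Penalty_Due,Installment_Arrears,CFWD,Balance_Due,Date_Prepared,Prepared_By) " +
        "values(@a,@b,@c,@d,@e,@f,@g,@h,@i,@j,@k,@l,@m,@n,@o,@p,@q)", con1);
    ```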

    Read the article

  • Data Warehouse ETL slow - change primary key in dimension?

    - by Jubbles
    I have a working MySQL data warehouse that is organized as a star schema and I am using Talend Open Studio for Data Integration 5.1 to create the ETL process. I would like this process to run once per day. I have estimated that one of the dimension tables (dimUser) will have approximately 2 million records and 23 columns. I created a small test ETL process in Talend that worked, but given the amount of data that may need to be updated daily, the current performance will not cut it. It takes the ETL process four minutes to UPDATE or INSERT 100 records to dimUser. If I assumed a linear relationship between the count of records and the amount of time to UPDATE or INSERT, then there is no way the ETL can finish in 3-4 hours (my hope), let alone one day. Since I'm unfamiliar with Java, I wrote the ETL as a Python script and ran into the same problem. Although, I did discover that if I did only INSERT, the process went much faster. I am pretty sure that the bottleneck is caused by the UPDATE statements. The primary key in dimUser is an auto-increment integer. My friend suggested that I scrap this primary key and replace it with a multi-field primary key (in my case, 2-3 fields). Before I rip the test data out of my warehouse and change the schema, can anyone provide suggestions or guidelines related to the design of the data warehouse the ETL process how realistic it is to have an ETL process INSERT or UPDATE a few million records each day will my friend's suggestion significantly help If you need any further information, just let me know and I'll post it. UPDATE - additional information: mysql> describe dimUser; Field Type Null Key Default Extra user_key int(10) unsigned NO PRI NULL auto_increment id_A int(10) unsigned NO NULL id_B int(10) unsigned NO NULL field_4 tinyint(4) unsigned NO 0 field_5 varchar(50) YES NULL city varchar(50) YES NULL state varchar(2) YES NULL country varchar(50) YES NULL zip_code varchar(10) NO 99999 field_10 tinyint(1) NO 0 field_11 tinyint(1) NO 0 field_12 tinyint(1) NO 0 field_13 tinyint(1) NO 1 field_14 tinyint(1) NO 0 field_15 tinyint(1) NO 0 field_16 tinyint(1) NO 0 field_17 tinyint(1) NO 1 field_18 tinyint(1) NO 0 field_19 tinyint(1) NO 0 field_20 tinyint(1) NO 0 create_date datetime NO 2012-01-01 00:00:00 last_update datetime NO 2012-01-01 00:00:00 run_id int(10) unsigned NO 999 I used a surrogate key because I had read that it was good practice. Since, from a business perspective, I want to keep aware of potential fraudulent activity (say for 200 days a user is associated with state X and then the next day they are associated with state Y - they could have moved or their account could have been compromised), so that is why geographic data is kept. The field id_B may have a few distinct values of id_A associated with it, but I am interested in knowing distinct (id_A, id_B) tuples. In the context of this information, my friend suggested that something like (id_A, id_B, zip_code) be the primary key. For the large majority of daily ETL processes (80%), I only expect the following fields to be updated for existing records: field_10 - field_14, last_update, and run_id (this field is a foreign key to my etlLog table and is used for ETL auditing purposes).
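    If the slowdown really comes from issuing one UPDATE per changed row, one common alternative that keeps the auto-increment surrogate key is to add a unique key on the natural key and let MySQL upsert in batches. A sketch, assuming (id_A, id_B) uniquely identifies a user; the sample values are made up:

    ```sql
    -- Keep user_key as the surrogate primary key, add a unique natural key,
    -- and upsert many rows per statement instead of one UPDATE per row.
    ALTER TABLE dimUser ADD UNIQUE KEY uk_dimUser_natural (id_A, id_B);

    INSERT INTO dimUser (id_A, id_B, field_10, field_11, field_12, field_13, field_14, last_update, run_id)
    VALUES
      (1001, 42, 0, 1, 0, 1, 0, NOW(), 123),
      (1002, 43, 1, 0, 0, 1, 0, NOW(), 123)   -- ... hundreds or thousands of rows per batch
    ON DUPLICATE KEY UPDATE
      field_10    = VALUES(field_10),
      field_11    = VALUES(field_11),
      field_12    = VALUES(field_12),
      field_13    = VALUES(field_13),
      field_14    = VALUES(field_14),
      last_update = VALUES(last_update),
      run_id      = VALUES(run_id);
    ```

    Whether the composite natural key should replace user_key as the primary key is a separate question; the usual star-schema advice is to keep the surrogate key for joins from fact tables and use the natural key only for matching rows during the load.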

    Read the article

  • Infinite sharing system (PHP/MySQLi)

    - by Toine Lille
    I'm working on a discount system for whichever customer shares a product and brings in new customers. Each unique visit = $0.05 off, each new customer = $0.50 off (it's a cheap product so yeah, no big numbers). When a new customer shares the site, the customer initially responsible for the new customer (if any) will get half of the new customer's discount as well. The initial customer would get a fourth for the next level and the new customer half of that, etc, creating a tree or pyramid that way that could be infinite. Initial customer ($1.35 discount: 2 new+3 visits + half of 1 new+2 visits) Visitor ($0) Visitor ($0) New customer ($0.60) Visitor ($0) Visitor ($0) Newer customer ($0) New customer ($0) Visitor ($0) The customers are saved along with their IP addresses (bin2hex(inet_pton)) in a database table (customers) with info like a unique id, e-mail address and first date/time the purchased a product (= time of registration). The shares are saved in a separate table within the same database (sharing). Each unique IP addresses that visits the site creates a new row featuring the IP address (also saved as bin2hex(inet_pton)), the id of the customer who shared it and the date/time of the visit. Sharing goes via URL, featuring a GET element containing the customer's id. Visits and new customers overlap, as visits will always occur before the new customer does. That's fine. The date/times are used just to make it a little more secure (I also use the IP along with cookies to see if people cheat the system). If an IP is already in the sharing or customer tables, it does not count and will not create a new entry. Now the problem is, how to make the infinity happen and apply the different values to it? That's all I'd need to know. It needs to calculate the discount for each customer separately, but also allow for monitoring altogether (though that's just a matter of passing all ID's through it). I figured I'd start (after the database connection) with $stmt = $con->prepare('SELECT ip,datetime FROM sharing WHERE sender=?'); $stmt->bind_param('i',$customerid); $stmt->execute(); $stmt->store_result(); $discount = $discount + ($stmt->num_rows * 0.05); $stmt->bind_result($ip,$timeofsharing); to translate all the visits to $0.05 of discount each. To check for the new customers that came from these visits, I wrote the following: while ($sql->fetch()) { $stmt2 = $con->prepare("SELECT datetime FROM users WHERE ip=?"); $stmt2->bind_param('s',$ip); $stmt2->execute(); $stmt2->store_result(); $stmt2->bind_result($timeofpurchase); Followed by a little more security comparing the datetimes: while ($stmt2->fetch()) { if (strtotime($timeofpurchase) < strtotime($timeofsharing)) { $discount = $discount + $0.50; } But this is just for the initial customer's direct results. If I'd want to check for the next level, I'd basically have to put the exact same check and loop in itself, checking each new customer the initial customer they brought to the site, and then for the next level again to check all of the newer customers, etc, etc. What to do? / Where to go? / What would be the correct practice for this? Thanks!
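    For the recursion itself, one option is a single function that computes a customer's direct discount and then calls itself for each customer that customer brought in, halving the factor per level. A rough sketch reusing the table and column names quoted above; the id column on users and the depth cut-off are assumptions, and get_result() needs the mysqlnd driver (bind_result(), as in your snippets, works just as well):

    ```php
    <?php
    // Sketch only: walk the referral tree recursively, halving the contribution
    // factor at each level ($0.05 per unique visit, $0.50 per new customer).
    function discountFor(mysqli $con, $customerId, $factor = 1.0, $depth = 0)
    {
        if ($depth > 20) {
            return 0.0; // safety cut-off for malformed data
        }

        $discount = 0.0;

        // every unique visit this customer generated
        $stmt = $con->prepare('SELECT ip, datetime FROM sharing WHERE sender = ?');
        $stmt->bind_param('i', $customerId);
        $stmt->execute();
        $visits = $stmt->get_result();
        $discount += $visits->num_rows * 0.05 * $factor;

        // visits that later became customers: $0.50 each, plus half of that customer's own tree
        while ($visit = $visits->fetch_assoc()) {
            $stmt2 = $con->prepare('SELECT id, datetime FROM users WHERE ip = ?');
            $stmt2->bind_param('s', $visit['ip']);
            $stmt2->execute();
            $buyer = $stmt2->get_result()->fetch_assoc();

            // count the buyer only if they registered after the shared visit,
            // matching the description above
            if ($buyer && strtotime($buyer['datetime']) > strtotime($visit['datetime'])) {
                $discount += 0.50 * $factor;
                $discount += discountFor($con, (int) $buyer['id'], $factor / 2, $depth + 1);
            }
        }

        return $discount;
    }
    ```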

    Read the article

  • Why is one loop performing better than the other, both in memory use and in speed?

    - by Mohit
    I have following two loops in C#, and I am running these loops for a collection with 10,000 records being downloaded with paging using "yield return" First foreach(var k in collection) { repo.Save(k); } Second var collectionEnum = collection.GetEnumerator(); while (collectionEnum.MoveNext()) { var k = collectionEnum.Current; repo.Save(k); k = null; } Seems like that the second loop consumes less memory and it faster than the first loop. Memory I understand may be because of k being set to null(Even though I am not sure). But how come it is faster than for each. Following is the actual code [Test] public void BechmarkForEach_Test() { bool isFirstTimeSync = true; Func<Contact, bool> afterProcessing = contactItem => { return true; }; var contactService = CreateSerivce("/administrator/components/com_civicrm"); var contactRepo = new ContactRepository(new Mock<ILogger>().Object); contactRepo.Drop(); contactRepo = new ContactRepository(new Mock<ILogger>().Object); Profile("For Each Profiling",1,()=>{ var localenumertaor=contactService.Download(); foreach (var item in localenumertaor) { if (isFirstTimeSync) item.StateFlag = 1; item.ClientTimeStamp = DateTime.UtcNow; if (item.StateFlag == 1) contactRepo.Insert(item); else contactRepo.Update(item); afterProcessing(item); } contactRepo.DeleteAll(); }); } [Test] public void BechmarkWhile_Test() { bool isFirstTimeSync = true; Func<Contact, bool> afterProcessing = contactItem => { return true; }; var contactService = CreateSerivce("/administrator/components/com_civicrm"); var contactRepo = new ContactRepository(new Mock<ILogger>().Object); contactRepo.Drop(); contactRepo = new ContactRepository(new Mock<ILogger>().Object); var itemsCollection = contactService.Download().GetEnumerator(); Profile("While Profiling", 1, () => { while (itemsCollection.MoveNext()) { var item = itemsCollection.Current; //if First time sync then ignore and overwrite the stateflag if (isFirstTimeSync) item.StateFlag = 1; item.ClientTimeStamp = DateTime.UtcNow; if (item.StateFlag == 1) contactRepo.Insert(item); else contactRepo.Update(item); afterProcessing(item); item = null; } contactRepo.DeleteAll(); }); } static void Profile(string description, int iterations, Action func) { // clean up GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); // warm up func(); var watch = Stopwatch.StartNew(); for (int i = 0; i < iterations; i++) { func(); } watch.Stop(); Console.Write(description); Console.WriteLine(" Time Elapsed {0} ms", watch.ElapsedMilliseconds); } I m using the micro bench marking, from a stackoverflow question itself benchmarking-small-code The time taken is For Each Profiling Time Elapsed 5249 ms While Profiling Time Elapsed 116 ms
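    One thing worth ruling out before comparing loop constructs: in the while benchmark the enumerator is created outside Profile(), so the warm-up call inside Profile() drains the yield-return sequence and the timed iteration then loops over an already-exhausted enumerator. A small self-contained sketch (not the original repository code) showing the effect:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    // Illustration only: an enumerator produced by a "yield return" iterator can be
    // walked exactly once, so timing a loop over a drained enumerator measures an
    // empty loop rather than the loop construct itself.
    static class EnumeratorDrainDemo
    {
        static IEnumerable<int> Download()
        {
            for (int i = 0; i < 100000; i++)
                yield return i;
        }

        static void Main()
        {
            // Fresh enumeration inside the timed section: does real work.
            var watch = Stopwatch.StartNew();
            int count1 = 0;
            foreach (var item in Download()) count1++;
            watch.Stop();
            Console.WriteLine("fresh foreach : {0} items, {1} ticks", count1, watch.ElapsedTicks);

            // Enumerator created (and drained by a warm-up pass) before timing starts.
            var enumerator = Download().GetEnumerator();
            while (enumerator.MoveNext()) { }          // "warm-up" consumes everything

            watch = Stopwatch.StartNew();
            int count2 = 0;
            while (enumerator.MoveNext()) count2++;    // immediately false
            watch.Stop();
            Console.WriteLine("drained while : {0} items, {1} ticks", count2, watch.ElapsedTicks);
        }
    }
    ```

    If both tests obtain the sequence inside the profiled action, the two loop forms should come out nearly identical.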

    Read the article

  • How to select chosen columns from two different entities into one DTO using NHibernate?

    - by Pawel Krakowiak
    I have two classes (just recreating the problem): public class User { public virtual int Id { get; set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual IList<OrgUnitMembership> OrgUnitMemberships { get; set; } } public class OrgUnitMembership { public virtual int UserId { get; set; } public virtual int OrgUnitId { get; set; } public virtual DateTime JoinDate { get; set; } public virtual DateTime LeaveDate { get; set; } } There's a Fluent NHibernate map for both, of course: public class UserMapping : ClassMap<User> { public UserMapping() { Table("Users"); Id(e => e.Id).GeneratedBy.Identity(); Map(e => e.FirstName); Map(e => e.LastName); HasMany(x => x.OrgUnitMemberships) .KeyColumn(TypeReflector<OrgUnitMembership> .GetPropertyName(p => p.UserId))).ReadOnly().Inverse(); } } public class OrgUnitMembershipMapping : ClassMap<OrgUnitMembership> { public OrgUnitMembershipMapping() { Table("OrgUnitMembership"); CompositeId() .KeyProperty(x=>x.UserId) .KeyProperty(x=>x.OrgUnitId); Map(x => x.JoinDate); Map(x => x.LeaveDate); References(oum => oum.OrgUnit) .Column(TypeReflector<OrgUnitMembership> .GetPropertyName(oum => oum.OrgUnitId)).ReadOnly(); References(oum => oum.User) .Column(TypeReflector<OrgUnitMembership> .GetPropertyName(oum => oum.UserId)).ReadOnly(); } } What I want to do is to retrieve some users based on criteria, but I would like to combine all columns from the Users table with some columns from the OrgUnitMemberships table, analogous to a SQL query: select u.*, m.JoinDate, m.LeaveDate from Users u inner join OrgUnitMemberships m on u.Id = m.UserId where m.OrgUnitId = :ouid I am totally lost, I tried many different options. Using a plain SQL query almost works, but because there are some nullable enums in the User class AliasToBean fails to transform, otherwise wrapping a SQL query would work like this: return Session .CreateSQLQuery(sql) .SetParameter("ouid", orgUnitId) .SetResultTransformer(Transformers.AliasToBean<UserDTO>()) .List<UserDTO>() I tried the code below as a test (a few different variants), but I'm not sure what I'm doing. It works partially, I get instances of UserDTO back, the properties coming from OrgUnitMembership (dates) are filled, but all properties from User are null: User user = null; OrgUnitMembership membership = null; UserDTO dto = null; var users = Session.QueryOver(() => user) .SelectList(list => list .Select(() => user.Id) .Select(() => user.FirstName) .Select(() => user.LastName)) .JoinAlias(u => u.OrgUnitMemberships, () => membership) //.JoinQueryOver<OrgUnitMembership>(u => u.OrgUnitMemberships) .SelectList(list => list .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate) .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate)) .TransformUsing(Transformers.AliasToBean<UserDTO>()) .List<UserDTO>();
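    The null User properties are what AliasToBean produces when a projection has no alias matching a DTO property: the first SelectList above projects Id, FirstName and LastName without WithAlias, so the transformer has nothing to copy them into. A hedged sketch with an alias on every column; the UserDTO property names and the orgUnitId filter are assumptions:

    ```csharp
    User user = null;
    OrgUnitMembership membership = null;
    UserDTO dto = null;

    var users = Session.QueryOver(() => user)
        .JoinAlias(u => u.OrgUnitMemberships, () => membership)
        .Where(() => membership.OrgUnitId == orgUnitId)
        .SelectList(list => list
            // every projection needs a WithAlias that matches a UserDTO property
            .Select(() => user.Id).WithAlias(() => dto.Id)
            .Select(() => user.FirstName).WithAlias(() => dto.FirstName)
            .Select(() => user.LastName).WithAlias(() => dto.LastName)
            .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate)
            .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate))
        .TransformUsing(Transformers.AliasToBean<UserDTO>())
        .List<UserDTO>();
    ```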

    Read the article

  • Form gets submitted multiple times

    - by rasika vijay
    While clicking inside the form , the form automatically tried to resubmit , I cannot figure out which part of the code causes it to behave like this . In the createTable function , a table is created after the domain is given . But I am unable to select any of the controls in the output . I have attached the jsfiddle code link here : http://jsfiddle.net/rasikaceg/S7kWM/ function createTable() { document.getElementById("table_container").innerHTML = ""; var input_domain = document.forms["form1"]["DomainName"].value; if (input_domain == null || input_domain == "") return; var table = document.createElement("table"), tablehead = document.createElement("thead"), theadrow = document.createElement("tr"), th1 = document.createElement("th"), th2 = document.createElement("th"), th3 = document.createElement("th"), th4 = document.createElement("th"); th1.appendChild(document.createTextNode("Website")); th2.appendChild(document.createTextNode("Enable/Disable Live Update for LM and CBD")); th3.appendChild(document.createTextNode("From Date")); th4.appendChild(document.createTextNode("To Date")); theadrow.appendChild(th1); theadrow.appendChild(th2); theadrow.appendChild(th3); theadrow.appendChild(th4); tablehead.appendChild(theadrow); table.appendChild(tablehead); var names = ["website1", "website2"]; var container = document.getElementById("table_container"); var tablebody = document.createElement("tbody"); for (var i = 0, len = names.length; i < len; ++i) { var row = document.createElement("tr"), column1 = document.createElement("td"), column2 = document.createElement("td"), column3 = document.createElement("td"), column4 = document.createElement("td"), checkbox = document.createElement('input'); checkbox.type = "checkbox"; checkbox.name = names[i]; checkbox.value = names[i]; checkbox.id = names[i]; var label = document.createElement('label') label.htmlFor = names[i]; label.appendChild(document.createTextNode(names[i])); column1.appendChild(checkbox); column1.appendChild(label); var dropdown = document.createElement("select"); dropdown.name = names[i] + "_select"; var op1 = new Option(); op1.value = "enable"; op1.text = "enable"; var op2 = new Option(); op2.value = "disable"; op2.text = "disable"; dropdown.options.add(op1); dropdown.options.add(op2); column2.appendChild(dropdown); var datetime_from = document.createElement('input'); datetime_from.type = "datetime-local"; datetime_from.name = names[i] + "_from"; column3.appendChild(datetime_from); var datetime_to = document.createElement('input'); datetime_to.type = "datetime-local"; datetime_to.name = names[i] + "_to"; column4.appendChild(datetime_to); row.appendChild(column1); row.appendChild(column2); row.appendChild(column3); row.appendChild(column4); tablebody.appendChild(row); } table.appendChild(tablebody); document.getElementById("table_container").appendChild(table); }
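    Without seeing the button markup it is hard to be certain, but the usual cause of this symptom is that the button wired to createTable() sits inside the form without an explicit type attribute, so it defaults to type="submit" and posts the form on every click. Two hedged fixes, sketched with an assumed button id:

    ```javascript
    // Option 1: in the markup, declare the button as a plain button:
    //   <button type="button" onclick="createTable()">Build table</button>
    //
    // Option 2: keep the markup as-is and cancel the submit in the handler.
    document.getElementById("buildBtn").addEventListener("click", function (event) {
        event.preventDefault();   // stop the implicit form submission
        createTable();
    });
    ```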

    Read the article

  • L2S DataContext out of synch: row not found or changed

    - by awrigley
    The Problem I am getting a number of errors that imply that the DataContext, or rather, the way I am using the DataContext is getting out of synch. The error occurs on db.SubmitChanges() where db is my DataContext instance. The error is: Row not found or changed. The problem only occurs intermitently, for example, adding a row then deleting it. If I stop the dev server and restart, the added row is there and I can delete it no problem. Ie, it seems that the problem is related to the DataContext losing track of the rows that have been added. IMPORTANT: Before anyone votes to close this thread, on the basis of it being a duplicate, I have checked the sql server profiler and there is no "Where 0 = 1" in the SQL. I have also recreated the dbml file, so am satisfied that the database schema is in synch with the schema represented by the dbml file. Ie, no cases of mismatched nullable/not nullable columns, etc. My Diagnosis (for what it is worth): It seems to be a problem related to how I am using the DataContext. I am new to MVC, Repositories and Services patterns, so suspect that I have wired things up wrong. The Setup Simple eLearning app in its early stages. Pupils need to be able to add and delete courses (Courses table) to their UserCourses. To do this, I have a service that gets a specific DataContext instance Dependency Injected into its constructor. Service Class Constructor: public class SqlPupilBlockService : IPupilBlockService { DataContext db; public SqlPupilBlockService(DataContext db) { this.db = db; CoursesRepository = new SqlRepository<Course>(db); UserCoursesRepository = new SqlRepository<UserCourse>(db); } // Etc, etc } The CoursesRepository and UserCoursesRepository are both private properties of the service class that are of Type IRepository (just a simple generic repository interface). Sql Respoitory Code: public class SqlRepository<T> : IRepository<T> where T : class { DataContext db; public SqlRepository(DataContext db) { this.db = db; } #region IRepository<T> Members public IQueryable<T> Query { get { return db.GetTable<T>(); } } public List<T> FetchAll() { return Query.ToList(); } public void Add(T entity) { db.GetTable<T>().InsertOnSubmit(entity); } public void Delete(T entity) { db.GetTable<T>().DeleteOnSubmit(entity); } public void Save() { db.SubmitChanges(); } #endregion } The two methods for adding and deleting UserCourses are: Service Methods for Adding and Deleting UserCourses: public void AddUserCourse(int courseId) { UserCourse uc = new UserCourse(); uc.IdCourse = courseId; uc.IdUser = UserId; uc.DateCreated = DateTime.Now; uc.DateAmended = DateTime.Now; uc.Role = "Pupil"; uc.CourseNotes = string.Empty; uc.ActiveStepIndex = 0; UserCoursesRepository.Add(uc); UserCoursesRepository.Save(); } public void DeleteUserCourse(int courseId) { var uc = (UserCoursesRepository.Query.Where(x => x.IdUser == UserId && x.IdCourse == courseId)).Single(); UserCoursesRepository.Delete(uc); UserCoursesRepository.Save(); } Ajax I am using Ajax via Ajax.BeginForm I don't think that is relevant. ASP.NET MVC 3 I am using mvc3, but don't think that is relevant: the errors are related to model code.
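    If the injected DataContext lives longer than a single request (for example, registered as a singleton in the container), its tracked state will drift from the database exactly as described, and "Row not found or changed" on unrelated rows is a common symptom. A hedged sketch of scoping the context to one unit of work instead; the connection-string name is an assumption and the entity initialisation mirrors AddUserCourse above:

    ```csharp
    // Sketch: create and dispose a fresh DataContext per operation so it only
    // tracks the changes made in that operation.
    public void AddUserCourse(int courseId)
    {
        using (var db = new DataContext(
            ConfigurationManager.ConnectionStrings["Default"].ConnectionString))
        {
            var userCourses = new SqlRepository<UserCourse>(db);
            userCourses.Add(new UserCourse
            {
                IdCourse = courseId,
                IdUser = UserId,
                DateCreated = DateTime.Now,
                DateAmended = DateTime.Now,
                Role = "Pupil",
                CourseNotes = string.Empty,
                ActiveStepIndex = 0
            });
            userCourses.Save();   // SubmitChanges on a context that tracks only this change
        }
    }
    ```

    If the container supports it, registering the DataContext with a per-request lifetime achieves the same thing without touching the service methods.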

    Read the article

  • .NET GDI+ image size - file codec limitations

    - by roygbiv
    Is there a limit on the size of image that can be encoded using the image file codecs available from .NET? I'm trying to encode images 4GB in size, but it simply does not work (or does not work properly i.e. writes out an unreadable file) with .bmp, .jpg, .png or the .tif encoders. When I lower the image size to < 2GB it does work with the .jpg but not the .bmp, .tif or .png. My next attempt would be to try libtiff because I know tiff files are meant for large images. What is a good file format for large images? or am I just hitting the file format limitations? Random r = new Random((int)DateTime.Now.Ticks); int width = 64000; int height = 64000; int stride = (width % 4) > 0 ? width + (width % 4) : width; UIntPtr dataSize = new UIntPtr((ulong)stride * (ulong)height); IntPtr p = Program.VirtualAlloc(IntPtr.Zero, dataSize, Program.AllocationType.COMMIT | Program.AllocationType.RESERVE, Program.MemoryProtection.READWRITE); Bitmap bmp = new Bitmap(width, height, stride, PixelFormat.Format8bppIndexed, p); BitmapData bd = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, bmp.PixelFormat); ColorPalette cp = bmp.Palette; for (int i = 0; i < cp.Entries.Length; i++) { cp.Entries[i] = Color.FromArgb(i, i, i); } bmp.Palette = cp; unsafe { for (int y = 0; y < bd.Height; y++) { byte* row = (byte*)bd.Scan0.ToPointer() + (y * bd.Stride); for (int x = 0; x < bd.Width; x++) { *(row + x) = (byte)r.Next(256); } } } bmp.UnlockBits(bd); bmp.Save(@"c:\test.jpg", ImageFormat.Jpeg); bmp.Dispose(); Program.VirtualFree(p, UIntPtr.Zero, 0x8000); I have also tried using a pinned GC memory region, but this is limited to < 2GB. Random r = new Random((int)DateTime.Now.Ticks); int bytesPerPixel = 4; int width = 4000; int height = 4000; int padding = 4 - ((width * bytesPerPixel) % 4); padding = (padding == 4 ? 0 : padding); int stride = (width * bytesPerPixel) + padding; UInt32[] pixels = new UInt32[width * height]; GCHandle gchPixels = GCHandle.Alloc(pixels, GCHandleType.Pinned); using (Bitmap bmp = new Bitmap(width, height, stride, PixelFormat.Format32bppPArgb, gchPixels.AddrOfPinnedObject())) { for (int y = 0; y < height; y++) { int row = (y * width); for (int x = 0; x < width; x++) { pixels[row + x] = (uint)r.Next(); } } bmp.Save(@"c:\test.jpg", ImageFormat.Jpeg); } gchPixels.Free();

    Read the article

  • LINQ-like or SQL-like DSL for end-users to run queries to select (not modify) data?

    - by Mark Rushakoff
    For a utility I'm working on, the client would like to be able to generate graphic reports on the data that has been collected. I can already generate a couple canned graphs (using ZedGraph, which is a very nice library); however, the utility would be much more flexible if the graphs were more programmable or configurable by the end-user. TLDR version I want users to be able to use something like SQL to safely extract and select data from a List of objects that I provide and can describe. What free tools or libraries will help me accomplish this? Full version I've given thought to using IronPython, IronRuby, and LuaInterface, but frankly they're all a bit overpowered for what I want to do. My classes are fairly simple, along the lines of: class Person: string Name; int HeightInCm; DateTime BirthDate; Weight[] WeighIns; class Weight: int WeightInKg; DateTime Date; Person Owner; (exact classes have been changed to protect the innocent). To come up with the data for the graph, the user will choose whether it's a bar graph, scatter plot, etc., and then to actually obtain the data, I would like to obtain some kind of List from the user simply entering something SQL-ish along the lines of SELECT Name, AVG(WeighIns) FROM People SELECT WeightInKg, Owner.HeightInCm FROM Weights And as a bonus, it would be nice if you could actually do operations as well: SELECT WeightInKg, (Date - Owner.BirthDate) AS Age FROM Weights The DSL doesn't have to be compliant SQL in any way; it doesn't even have to resemble SQL, but I can't think of a more efficient descriptive language for the task. I'm fine filling in blanks; I don't expect a library to do everything for me. What I would expect to exist (but haven't been able to find in any way, shape, or form) is something like Fluent NHibernate (which I am already using in the project) where I can declare a mapping, something like var personRequest = Request<Person>(); personRequest.Item("Name", (p => p.Name)); personRequest.Item("HeightInCm", (p => p.HeightInCm)); personRequest.Item("HeightInInches", (p => p.HeightInCm * CM_TO_INCHES)); // ... var weightRequest = Request<Weight>(); weightRequest.Item("Owner", (w => w.Owner), personRequest); // Indicate a chain to personRequest // ... var people = Table<Person>("People", GetPeopleFromDatabase()); var weights = Table<Weight>("Weights", GetWeightsFromDatabase()); // ... TryRunQuery(userInputQuery); LINQ is so close to what I want to do, but AFAIK there's no way to sandbox it. I don't want to expose any unnecessary functionality to the end user; meaning I don't want the user to be able to send in and process: from p in people select (p => { System.IO.File.Delete("C:\\something\\important"); return p.Name }) So does anyone know of any free .NET libraries that allow something like what I've described above? Or is there some way to sandbox LINQ? cs-script is close too, but it doesn't seem to offer sandboxing yet either. I'd be hesitant to expose the NHibernate interface either, as the user should have a read-only view of the data at this point in the usage. I'm using C# 3.5, and pure .NET solutions would be preferred. The bottom line is that I'm really trying to avoid writing my own parser for a subset of SQL that would only apply to this single project.
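    Since the goal is a whitelist rather than a sandbox, one workable direction is exactly the Request<T> registry sketched above plus a deliberately tiny parser: the user's query can only ever reach accessors you registered, so there is nothing dangerous to lock down. A minimal sketch (no WHERE clause, no aggregates, no error handling):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Whitelist interpreter: only registered accessors are reachable from a query.
    public class Request<T>
    {
        private readonly Dictionary<string, Func<T, object>> items =
            new Dictionary<string, Func<T, object>>(StringComparer.OrdinalIgnoreCase);

        public void Item(string name, Func<T, object> accessor)
        {
            items[name] = accessor;
        }

        // Handles only "SELECT field1, field2 FROM Anything".
        public IEnumerable<Dictionary<string, object>> Run(string query, IEnumerable<T> table)
        {
            var parts = query.Split(new[] { "SELECT", "FROM" }, StringSplitOptions.RemoveEmptyEntries);
            var fields = parts[0].Split(',').Select(f => f.Trim()).ToList();

            foreach (var row in table)
                yield return fields.ToDictionary(f => f, f => items[f](row));
        }
    }

    // Usage, mirroring the classes described above:
    //   var personRequest = new Request<Person>();
    //   personRequest.Item("Name", p => p.Name);
    //   personRequest.Item("HeightInInches", p => p.HeightInCm * CM_TO_INCHES);
    //   var rows = personRequest.Run("SELECT Name, HeightInInches FROM People", people);
    ```

    Aggregates like AVG(WeighIns) and computed columns would need a richer grammar, but the safety property stays the same: the query text never becomes executable code, it only selects from the registry.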

    Read the article

  • How to make MySQL utilize available system resources, or find "the real problem"?

    - by anonymous coward
    This is a MySQL 5.0.26 server, running on SuSE Enterprise 10. This may be a Serverfault question. The web user interface that uses these particular queries (below) is showing sometimes 30+, even up to 120+ seconds at the worst, to generate the pages involved. On development, when the queries are run alone, they take up to 20 seconds on the first run (with no query cache enabled) but anywhere from 2 to 7 seconds after that - I assume because the tables and indexes involved have been placed into ram. From what I can tell, the longest load times are caused by Read/Update Locking. These are MyISAM tables. So it looks like a long update comes in, followed by a couple 7 second queries, and they're just adding up. And I'm fine with that explanation. What I'm not fine with is that MySQL doesn't appear to be utilizing the hardware it's on, and while the bottleneck seems to be the database, I can't understand why. I would say "throw more hardware at it", but we did and it doesn't appear to have changed the situation. Viewing a 'top' during the slowest times never shows much cpu or memory utilization by mysqld, as if the server is having no trouble at all - but then, why are the queries taking so long? How can I make MySQL use the crap out of this hardware, or find out what I'm doing wrong? Extra Details: On the "Memory Health" tab in the MySQL Administrator (for Windows), the Key Buffer is less than 1/8th used - so all the indexes should be in RAM. I can provide a screen shot of any graphs that might help. So desperate to fix this issue. Suffice it to say, there is legacy code "generating" these queries, and they're pretty much stuck the way they are. I have tried every combination of Indexes on the tables involved, but any suggestions are welcome. Here's the current Create Table statement from development (the 'experimental' key I have added, seems to help a little, for the example query only): CREATE TABLE `registration_task` ( `id` varchar(36) NOT NULL default '', `date_entered` datetime NOT NULL default '0000-00-00 00:00:00', `date_modified` datetime NOT NULL default '0000-00-00 00:00:00', `assigned_user_id` varchar(36) default NULL, `modified_user_id` varchar(36) default NULL, `created_by` varchar(36) default NULL, `name` varchar(80) NOT NULL default '', `status` varchar(255) default NULL, `date_due` date default NULL, `time_due` time default NULL, `date_start` date default NULL, `time_start` time default NULL, `parent_id` varchar(36) NOT NULL default '', `priority` varchar(255) NOT NULL default '9', `description` text, `order_number` int(11) default '1', `task_number` int(11) default NULL, `depends_on_id` varchar(36) default NULL, `milestone_flag` varchar(255) default NULL, `estimated_effort` int(11) default NULL, `actual_effort` int(11) default NULL, `utilization` int(11) default '100', `percent_complete` int(11) default '0', `deleted` tinyint(1) NOT NULL default '0', `wf_task_id` varchar(36) default '0', `reg_field` varchar(8) default '', `date_offset` int(11) default '0', `date_source` varchar(10) default '', `date_completed` date default '0000-00-00', `completed_id` varchar(36) default NULL, `original_name` varchar(80) default NULL, PRIMARY KEY (`id`), KEY `idx_reg_task_p` (`deleted`,`parent_id`), KEY `By_Assignee` (`assigned_user_id`,`deleted`), KEY `status_assignee` (`status`,`deleted`), KEY `experimental` (`deleted`,`status`,`assigned_user_id`,`parent_id`,`date_due`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 And one of the ridiculous queries in question: SELECT users.user_name 
assigned_user_name, registration.FIELD001 parent_name, registration_task.status status, registration_task.date_modified date_modified, registration_task.date_due date_due, registration.FIELD240 assigned_wf, if(LENGTH(registration_task.description)>0,1,0) has_description, registration_task.* FROM registration_task LEFT JOIN users ON registration_task.assigned_user_id=users.id LEFT JOIN registration ON registration_task.parent_id=registration.id where (registration_task.status != 'Completed' AND registration.FIELD001 LIKE '%' AND registration_task.name LIKE '%' AND registration.FIELD060 LIKE 'GN001472%') AND registration_task.deleted=0 ORDER BY date_due asc LIMIT 0,20; my.cnf - '[mysqld]' section. [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking key_buffer = 384M max_allowed_packet = 100M table_cache = 2048 sort_buffer_size = 2M net_buffer_length = 100M read_buffer_size = 2M read_rnd_buffer_size = 160M myisam_sort_buffer_size = 128M query_cache_size = 16M query_cache_limit = 1M EXPLAIN above query, without additional index: +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ | 1 | SIMPLE | registration_task | ref | idx_reg_task_p,status_assignee | idx_reg_task_p | 1 | const | 1067354 | Using where; Using filesort | | 1 | SIMPLE | registration | eq_ref | PRIMARY,gbl | PRIMARY | 8 | sugarcrm401.registration_task.parent_id | 1 | Using where | | 1 | SIMPLE | users | ref | PRIMARY | PRIMARY | 38 | sugarcrm401.registration_task.assigned_user_id | 1 | | +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ EXPLAIN above query, with 'experimental' index: +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+ | 1 | SIMPLE | registration_task | range | idx_reg_task_p,status_assignee,NewIndex1,tcg_experimental | tcg_experimental | 259 | NULL | 103345 | Using where; Using filesort | | 1 | SIMPLE | registration | eq_ref | PRIMARY,gbl | PRIMARY | 8 | sugarcrm401.registration_task.parent_id | 1 | Using where | | 1 | SIMPLE | users | ref | PRIMARY | PRIMARY | 38 | sugarcrm401.registration_task.assigned_user_id | 1 | | +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+
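    Hedged directions rather than a definite fix: the two LIKE '%' predicates match every non-NULL value, so the real filtering work is FIELD060 LIKE 'GN001472%' plus deleted/status, and on MyISAM every long UPDATE takes a table-level lock that stalls the readers queued behind it. If a registration-side index and row-level locking are worth trying, the sketch would look something like this (index name illustrative; InnoDB buffer settings need tuning separately, since key_buffer only caches MyISAM indexes):

    ```sql
    -- Let the optimizer start from the registration side of the join.
    ALTER TABLE registration ADD INDEX idx_registration_field060 (FIELD060);

    -- Row-level locking instead of MyISAM's table-level read/update locks.
    ALTER TABLE registration      ENGINE = InnoDB;
    ALTER TABLE registration_task ENGINE = InnoDB;
    ```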

    Read the article

  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
    So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and now moving into the lime-light with the pending release of ASP.NET Web API which is bringing a ton of attention to server side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications as they allow you dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services. Yes I know C# is strongly typed, why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response and rather than having to map the entire type structure of what that service returns, JsonValue actually allows me to cherry pick and only work with the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need to have a strong type to de-serialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy to use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code. Creating JSON on the fly with JsonValue Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure. JsonValue uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue:[TestMethod] public void JsonValueOutputTest() { // strong type instance var jsonObject = new JsonObject(); // dynamic expando instance you can add properties to dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1977; album.Songs = new JsonArray() as dynamic; dynamic song = new JsonObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JsonObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces proper JSON just as you would expect: {"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]} The important thing about this code is that there's no explicitly type that is used for holding the values to serialize to JSON. I am essentially creating this value structure on the fly by adding properties and then serialize it to JSON. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JsonValue/JsonObject these values are stored in pseudo collections of key value pairs that are exposed as properties through the DynamicObject functionality in .NET. 
The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that the syntax looks like normal object access sytnax. Always remember though these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the values you create are accessed consistently and without typos in your code. Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JsonObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JsonValue internally stores properties keys and values in collections and you can iterate over them at runtime. You can also manipulate the collections if you need to to get the object structure to look exactly like you want. Again, if you've used ExpandoObject before JsonObject/Value are very similar in the behavior of the structure. Reading JSON strings into JsonValue The JsonValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JsonValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can be then referenced using familiar dynamic object syntax. Here's a simple example:[TestMethod] public void JsonValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JsonValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JsonPrimitive and I have to assign them to their appropriate types first before I can do type comparisons. The dynamic properties will automatically cast to the right type expected as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method oesn't as it expects two object instances and comparing json.Company to "West Wind" is comparing two different types (JsonPrimitive to String) which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example. 
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1977, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""67280fb8"", ""AlbumName"": ""Echoes, Silence, Patience & Grace"", ""Artist"": ""Foo Fighters"", ""YearReleased"": 2007, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"", ""Songs"": [ { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; dynamic albums = JsonValue.Parse(jsonString); foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName ); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName);}   It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact it looks and feels a lot like if you were using JavaScript to parse through this data, doesn't it? And that's the point…© Rick Strahl, West Wind Technologies, 2005-2012Posted in .NET  Web Api  JSON   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences this might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved "good enough" in my practical experience. But of course, I'm open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the 'right reason' is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
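For completeness, the LastResult requirement that the article skips ("it's trivial enough") would presumably look like the following in the same Gallio/MbUnit row-test idiom, mirroring the TestSubtractGivesExpectedLastResult test shown above. This is a sketch, not code from the original post:

    [Test, Description("Arguments: operand1, operand2, expectedResult")]
    [Row(1, 1, 2)]
    [Row(0, 0, 0)]
    public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        calculator.Add(operand1, operand2);

        // LastResult is expected to hold the outcome of the last successful operation.
        Assert.AreEqual(expectedResult, calculator.LastResult);
    }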

    Read the article

  • Directory structure for a website (js/css/img folders)

    - by nightcoder
For years I've been using the following directory structure for my websites:

<root>
 ->js
   ->jquery.js
   ->tooltip.js
   ->someplugin.js
 ->css
   ->styles.css
   ->someplugin.css
 ->images
   -> all website images...

It seemed perfectly fine to me until I began to use different 3rd-party components. For example, today I downloaded a datetime picker javascript component that looks for its images in the same directory where its css file is located (the css file contains urls like "url('calendar.png')"). So now I have 3 options:

1) Put datepicker.css into my css directory and put its images alongside it. I don't really like this option because I will have both css and image files inside the css directory, which is weird. Also, I might run into files from different components with the same name, e.g. two components whose css files both link to background.png. I would have to fix those name collisions (by renaming one of the files and editing the corresponding file that contains the link).

2) Put datepicker.css into my css directory, put its images into the images directory, and edit datepicker.css to look for the images in the images directory. This option is OK, but I have to spend time editing 3rd-party components to fit them to my site structure. Again, name collisions may occur here (as described in the previous option) and I would have to fix them.

3) Put datepicker.js, datepicker.css and its images into a separate directory, say /3rdParty/datepicker/, and place the files as the author intended (e.g. /3rdParty/datepicker/css/datepicker.css, /3rdParty/datepicker/css/something.png, etc.). Now I begin to think that this option is the most correct.

Experienced web developers, what do you recommend?
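To illustrate option 3, the combined layout might end up looking like the tree below, reusing the notation from above (an illustrative sketch only; file names beyond those already mentioned are hypothetical):

<root>
 ->js            (site-wide scripts: jquery.js, tooltip.js, ...)
 ->css           (site-wide styles: styles.css)
 ->images        (site-wide images)
 ->3rdParty
   ->datepicker
     ->js
       ->datepicker.js
     ->css
       ->datepicker.css
       ->calendar.png   (kept next to the css, as the component expects)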

    Read the article

  • CodePlex Daily Summary for Wednesday, January 12, 2011

    CodePlex Daily Summary for Wednesday, January 12, 2011Popular ReleasesGoogle URL Shortener API for .NET: Google URL Shortener API v1: According follow specification: http://code.google.com/apis/urlshortener/v1/reference.htmljGestures: a jQuery plugin for gesture events: 0.81: added event substitution for IE updated index.htmlStyleCop for ReSharper: StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release: Features: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ...SQL Monitor - tracking sql server activities: SQL Monitor 3.1 beta 1: 1. support alert message template 2. dynamic toolbar commands depending on functionality 3. fixed some bugs 4. refactored part of the code, now more stable and more clean upFacebook C# SDK: 4.2.1: - Authentication bug fixes - Updated Json.Net to version 4.0.0 - BREAKING CHANGE: Removed cookieSupport config setting, now automatic. This download is also availible on NuGet: Facebook FacebookWeb FacebookWebMvcUmbraco CMS: Umbraco 4.6: The Umbraco 4.6 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains nearly 200 bug fixes since the 4.5.2 release. Improved installer experience Updated Starter Kits (Simple, Blog, Personal, Business) Beautiful, free, customizable skins included Skinning engine and Skin customization (see Skinning Documentation Kit) Default dashboards on install with hide option Updated Login timeout ...ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2: This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: Multi-part geometries are now supported. Homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry. Mixed relations and super relations are maintained and tracked in a stand-alone relation table. The underlying editing logic has changed. As opposed to tracking the editing changes upon "Save edit" or "Stop edit" the changes a...Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In the case you are running an x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5) as it appears Hawkeye is broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (See issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with latest PHP opensource applications and several issue fixes. To improve the performance of your application using MySQL, please use Managed MySQL Extension for Phalanger. Changes made within this release include following: New features available only in Phalanger. 
Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t...EnhSim: EnhSim 2.3.0: 2.3.0This release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Changed how flame shoc...AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available Shows a list of recent mastery files in the Editor Tab (requested by quite a few people) Updater: Update information is now scrollable Added a buton to launch AutoLoL after updating is finished Updated the UI to match that of AutoLoL Fix: Detects and resolves 'Read Only' state on Version.xmlTweetSharp: TweetSharp v2.0.0.0 - Preview 7: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 7 ChangesFixes the regression issue in OAuth from Preview 6 Preview 6 ChangesMaintenance release with user reported fixes Preview 5 ChangesMaintenance release with user reported fixes Third Party Library VersionsHammock v1.0.6: http://hammock.codeplex.com Json.NET 3.5 Release 8: http://json.codeplex.comExtended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release?BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and SourcePlease note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...sNPCedit: sNPCedit v0.9d: added elementclient coordinate catcher to catch coordinates select a target (ingame) i.e. your char, npc or monster than click the button and coordinates+direction will be transfered to the selected row in the table corrected labels from Rot to Direction (because it is a vector)Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5: Voici une version attendu avec impatience pour beaucoup : - La version PL3 KAKAROTO intégre ses dernières modification et intégre maintenant le firmware 2.43 !!! 
Conclusion : - UltimateJB203PSXXXDEFAULTKAKAROTO=> Pas de spoof mais disponible pour les PS3 suivantes : 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43 - UltimateJB203PS341_HERMES => Pas de spoof mais version hermes 4b - UltimateJB203PS341HERMESSPOOF35X => hermes 4b + spoof des firmwares 3.50 et 3.55 au li....NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5- ASP.NET MVC 3 and EF Code First: Demo web app ASP.NET MVC 3, Razor and EF Code FirstVidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch...New ProjectsASP.NET MVC Scaffolding: Scaffolding package for ASP.NETAstor: OData Explorer: OData ExplorerBasic Users Community: A simple user community with threads and posts.Bukkit Server Manager: BSM makes server managing easy we have multiple type and database support including: MySql, SQLite types: VPS, Dedicated, Home PCCh4CP: Chamber 4 control programDotNetNuke Telerik Library: A set of Telerik wrappers for DotNetNuke module developers to utilize which aren't yet included as of 5.6.1. Eventually this will be offloaded to the core. Enjoy Life: our fypFolderSizeChecker: It suppose to check the size of big folders in specific partition and help user to find the most disk usage location. (It's simple project so please don't expect big and complex algorithms)HomeTeamOnline: This is project of HomeTeamOnlineICSWorld: This is project of ICSWorldIMAP Client for .NET 4.0 using LumiSoft: Develop an IMAP client using this sample project based on the LumiSoft .NET open source project. This project compiles in .NET 4.0 and demonstrates how to pull email using IMAP. The purpose of the project is for email auto processing.MUIExt (Multilingual User Interface Extender): MUIExt makes it easier for SharePoint 2010 users to create multilingual sites. You'll no longer have to live with the MUI limitations or have to manage variations. It's developed in csharp.Phoenix Service Bus: The goal of this pServiceBus is to provide an API and Service Components that would make implementing an ESB Infrastructure in your environment. It's developed in C#, and also have API written for Javascript Clients PhotoSnapper: Home project just to rename photos or .mov files in a folder starting from from a user defined number.redditfier: A windows application to notify redditors with new posts.SharePoint Field Updater: Automatically update sub fields according to a lookup field. 
For example: Updating field "Contact" will automatically put "Contact Email" and "Address" in the appropriate text fields.TXLCMS: emptyUmbraco Spark engine: Spark macro engine for UmbracoUrdu Translation: Urdu Translation Project WFTestDesign: BizUnit WF is based on BizUnit solution that allows user to define a test using WorkFlow UI, custom activities designed in this extension and general Workflow activities.It's enable also to use breakpoint in test. It's developed in C#.WPF Date Range Slider: A WPF Date Range Slider user control written with C# to allow your users to choose a range of dates using a double thumbed slider control.WPMind Framework for WP7: This project is used to provide some Windows Phone 7 controls for Windows Phone 7 Silverlight developer. Please join us if you are interested in this project.

    Read the article

  • Entity Framework v1 &ndash; tips and Tricks Part 3

    - by Rohit Gupta
    General Tips on Entity Framework v1 & Linq to Entities: ToTraceString() If you need to know the underlying SQL that the EF generates for a Linq To Entities query, then use the ToTraceString() method of the ObjectQuery class. (or use LINQPAD) Note that you need to cast the LINQToEntities query to ObjectQuery before calling TotraceString() as follows: 1: string efSQL = ((ObjectQuery)from c in ctx.Contact 2: where c.Address.Any(a => a.CountryRegion == "US") 3: select c.ContactID).ToTraceString(); ================================================================================ MARS or MultipleActiveResultSet When you create a EDM Model (EDMX file) from the database using Visual Studio, it generates a connection string with the same name as the name of the EntityContainer in CSDL. In the ConnectionString so generated it sets the MultipleActiveResultSet attribute to true by default. So if you are running the following query then it streams multiple readers over the same connection: 1: using (BAEntities context = new BAEntities()) 2: { 3: var cons = 4: from con in context.Contacts 5: where con.FirstName == "Jose" 6: select con; 7: foreach (var c in cons) 8: { 9: if (c.AddDate < new System.DateTime(2007, 1, 1)) 10: { 11: c.Addresses.Load(); 12: } 13: } 14: } ================================================================================= Explicitly opening and closing EntityConnection When you call ToList() or foreach on a LINQToEntities query the EF automatically closes the connection after all the records from the query have been consumed. Thus if you need to run many LINQToEntities queries over the same connection then explicitly open and close the connection as follows: 1: using (BAEntities context = new BAEntities()) 2: { 3: context.Connection.Open(); 4: var cons = from con in context.Contacts where con.FirstName == "Jose" 5: select con; 6: var conList = cons.ToList(); 7: var allCustomers = from con in context.Contacts.OfType<Customer>() 8: select con; 9: var allcustList = allCustomers.ToList(); 10: context.Connection.Close(); 11: } ====================================================================== Dispose ObjectContext only if required After you retrieve entities using the ObjectContext and you are not explicitly disposing the ObjectContext then insure that your code does consume all the records from the LinqToEntities query by calling .ToList() or foreach statement, otherwise the the database connection will remain open and will be closed by the garbage collector when it gets to dispose the ObjectContext. Secondly if you are making updates to the entities retrieved using LinqToEntities then insure that you dont inadverdently dispose of the ObjectContext after the entities are retrieved and before calling .SaveChanges() since you need the SAME ObjectContext to keep track of changes made to the Entities (by using ObjectStateEntry objects). So if you do need to explicitly dispose of the ObjectContext do so only after calling SaveChanges() and only if you dont need to change track the entities retrieved any further. ======================================================================= SQL InjectionAttacks under control with EFv1 LinqToEntities and LinqToSQL queries are parameterized before they are sent to the DB hence they are not vulnerable to SQL Injection attacks. EntitySQL may be slightly vulnerable to attacks since it does not use parameterized queries. However since the EntitySQL demands that the query be valid Entity SQL syntax and valid native SQL syntax at the same time. 
So the only way one can do a SQLInjection Attack is by knowing the SSDL of the EDM Model and be able to write the correct EntitySQL (note one cannot append regular SQL since then the query wont be a valid EntitySQL syntax) and append it to a parameter. ====================================================================== Improving Performance You can convert the EntitySets and AssociationSets in a EDM Model into precompiled Views using the edmgen utility. for e.g. the Customer Entity can be converted into a precompiled view using edmgen and all LinqToEntities query against the contaxt.Customer EntitySet will use the precompiled View instead of the EntitySet itself (the same being true for relationships (EntityReference & EntityCollections of a Entity)). The advantage being that when using precompiled views the performance will be much better. The syntax for generating precompiled views for a existing EF project is : edmgen /mode:ViewGeneration /inssdl:BAModel.ssdl /incsdl:BAModel.csdl /inmsl:BAModel.msl /p:Chap14.csproj Note that this will only generate precompiled views for EntitySets and Associations and not for existing LinqToEntities queries in the project.(for that use CompiledQuery.Compile<>) Secondly if you have a LinqToEntities query that you need to run multiple times, then one should precompile the query using CompiledQuery.Compile method. The CompiledQuery.Compile<> method accepts a lamda expression as a parameter, which denotes the LinqToEntities query  that you need to precompile. The following is a example of a lamda that we can pass into the CompiledQuery.Compile() method 1: Expression<Func<BAEntities, string, IQueryable<Customer>>> expr = (BAEntities ctx1, string loc) => 2: from c in ctx1.Contacts.OfType<Customer>() 3: where c.Reservations.Any(r => r.Trip.Destination.DestinationName == loc) 4: select c; Then we call the Compile Query as follows: 1: var query = CompiledQuery.Compile<BAEntities, string, IQueryable<Customer>>(expr); 2:  3: using (BAEntities ctx = new BAEntities()) 4: { 5: var loc = "Malta"; 6: IQueryable<Customer> custs = query.Invoke(ctx, loc); 7: var custlist = custs.ToList(); 8: foreach (var item in custlist) 9: { 10: Console.WriteLine(item.FullName); 11: } 12: } Note that if you created a ObjectQuery or a Enitity SQL query instead of the LINQToEntities query, you dont need precompilation for e.g. 1: An Example of EntitySQL query : 2: string esql = "SELECT VALUE c from Contacts AS c where c is of(BAGA.Customer) and c.LastName = 'Gupta'"; 3: ObjectQuery<Customer> custs = CreateQuery<Customer>(esql); 1: An Example of ObjectQuery built using ObjectBuilder methods: 2: from c in Contacts.OfType<Customer>().Where("it.LastName == 'Gupta'") 3: select c This is since the Query plan is cached and thus the performance improves a bit, however since the ObjectQuery or EntitySQL query still needs to materialize the results into Entities hence it will take the same amount of performance hit as with LinqToEntities. However note that not ALL EntitySQL based or QueryBuilder based ObjectQuery plans are cached. So if you are in doubt always create a LinqToEntities compiled query and use that instead ============================================================ GetObjectStateEntry Versus GetObjectByKey We can get to the Entity being referenced by the ObjectStateEntry via its Entity property and there are helper methods in the ObjectStateManager (osm.TryGetObjectStateEntry) to get the ObjectStateEntry for a entity (for which we know the EntityKey). 
Similarly The ObjectContext has helper methods to get an Entity i.e. TryGetObjectByKey(). TryGetObjectByKey() uses GetObjectStateEntry method under the covers to find the object, however One important difference between these 2 methods is that TryGetObjectByKey queries the database if it is unable to find the object in the context, whereas TryGetObjectStateEntry only looks in the context for existing entries. It will not make a trip to the database ============================================================= POCO objects with EFv1: To create POCO objects that can be used with EFv1. We need to implement 3 key interfaces: IEntityWithKey IEntityWithRelationships IEntityWithChangeTracker Implementing IEntityWithKey is not mandatory, but if you dont then we need to explicitly provide values for the EntityKey for various functions (for e.g. the functions needed to implement IEntityWithChangeTracker and IEntityWithRelationships). Implementation of IEntityWithKey involves exposing a property named EntityKey which returns a EntityKey object. Implementation of IEntityWithChangeTracker involves implementing a method named SetChangeTracker since there can be multiple changetrackers (Object Contexts) existing in memory at the same time. 1: public void SetChangeTracker(IEntityChangeTracker changeTracker) 2: { 3: _changeTracker = changeTracker; 4: } Additionally each property in the POCO object needs to notify the changetracker (objContext) that it is updating itself by calling the EntityMemberChanged and EntityMemberChanging methods on the changeTracker. for e.g.: 1: public EntityKey EntityKey 2: { 3: get { return _entityKey; } 4: set 5: { 6: if (_changeTracker != null) 7: { 8: _changeTracker.EntityMemberChanging("EntityKey"); 9: _entityKey = value; 10: _changeTracker.EntityMemberChanged("EntityKey"); 11: } 12: else 13: _entityKey = value; 14: } 15: } 16: ===================== Custom Property ==================================== 17:  18: [EdmScalarPropertyAttribute(IsNullable = false)] 19: public System.DateTime OrderDate 20: { 21: get { return _orderDate; } 22: set 23: { 24: if (_changeTracker != null) 25: { 26: _changeTracker.EntityMemberChanging("OrderDate"); 27: _orderDate = value; 28: _changeTracker.EntityMemberChanged("OrderDate"); 29: } 30: else 31: _orderDate = value; 32: } 33: } Finally you also need to create the EntityState property as follows: 1: public EntityState EntityState 2: { 3: get { return _changeTracker.EntityState; } 4: } The IEntityWithRelationships involves creating a property that returns RelationshipManager object: 1: public RelationshipManager RelationshipManager 2: { 3: get 4: { 5: if (_relManager == null) 6: _relManager = RelationshipManager.Create(this); 7: return _relManager; 8: } 9: } ============================================================ Tip : ProviderManifestToken – change EDMX File to use SQL 2008 instead of SQL 2005 To use with SQL Server 2008, edit the EDMX file (the raw XML) changing the ProviderManifestToken in the SSDL attributes from "2005" to "2008" ============================================================= With EFv1 we cannot use Structs to replace a anonymous Type while doing projections in a LINQ to Entities query. While the same is supported with LINQToSQL, it is not with LinqToEntities. For e.g. the following is not supported with LinqToEntities since only parameterless constructors and initializers are supported in LINQ to Entities. 
(the same works with LINQToSQL)

public struct CompanyInfo
{
    public int ID { get; set; }
    public string Name { get; set; }
}

var companies = (from c in dc.Companies
                 where c.CompanyIcon == null
                 select new CompanyInfo { Name = c.CompanyName, ID = c.CompanyId }).ToList();
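A common workaround (my own sketch, not part of the original tips) is to project into an anonymous type inside the LINQ to Entities query, switch to LINQ to Objects, and only then construct the struct:

    var companies = (from c in dc.Companies
                     where c.CompanyIcon == null
                     select new { c.CompanyName, c.CompanyId })   // anonymous type is fine for LINQ to Entities
                    .AsEnumerable()                               // leave the EF query provider
                    .Select(x => new CompanyInfo { Name = x.CompanyName, ID = x.CompanyId })
                    .ToList();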

    Read the article

  • Entity is currently read-only

    - by George Evjen
Quick post on an issue that we were having today. This fix may just be a band-aid for another issue, but it at least got us moving again today. Since I didn't see anything concrete online for a solution, I figured I would post this as a part-one fix and then post a more detailed fix later. We were getting this error earlier today in one of our projects that uses EF and WCF RIA Services:

This entity is currently read-only. One of the following conditions exist: a custom method has been invoked, a submit operation is in progress, or edit operations are not supported for the entity type.

The workaround that we used is to simply check whether the entity is read-only; each entity exposes an IsReadOnly property:

private void SelectedInstitution_PropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e)
{
    if (!_personDetailContext.CurrentPerson.IsReadOnly)
    {
        _personDetailContext.CurrentPerson.LastUpdatedDate = DateTime.Now;
    }
}

We check whether the CurrentPerson is read-only, and only if it isn't do we go ahead and execute the rest of the code. Again, this got us moving today; I am sure this is just step one in resolving this issue.
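If the same guard is needed in several handlers, it can be pulled into a small helper. The sketch below is my own; the Person type name and the method name are assumptions, not from the post:

    private static void TrySetLastUpdated(Person person)
    {
        // IsReadOnly is true while a submit or custom method invocation is in progress,
        // so skip the write instead of triggering the read-only exception.
        if (person != null && !person.IsReadOnly)
        {
            person.LastUpdatedDate = DateTime.Now;
        }
    }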

    Read the article

  • A Gentle .NET touch to Unix Touch

    - by lavanyadeepak
A Gentle .NET touch to Unix Touch
The Unix world has an elegant utility called 'touch', which updates the timestamp of the file whose path is passed to it as an argument. Unfortunately, Windows has no equally quick and direct tool. However, just a few lines of C# can fill this gap and stamp any file in the file system with the current time, subject to ACL restrictions.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace LavanyaDeepak.Utilities
{
    class Touch
    {
        static void Main(string[] args)
        {
            if (args.Length < 1)
            {
                Console.WriteLine("Please specify the path of the file to operate upon.");
                return;
            }

            if (!File.Exists(args[0]))
            {
                try
                {
                    FileAttributes objFileAttributes = File.GetAttributes(args[0]);
                    if ((objFileAttributes & FileAttributes.Directory) == FileAttributes.Directory)
                    {
                        Console.WriteLine("The input was not a regular file.");
                        return;
                    }
                }
                catch { }

                Console.WriteLine("The file does not seem to exist.");
                return;
            }

            try
            {
                File.SetLastWriteTime(args[0], DateTime.Now);
                Console.WriteLine("The touch completed successfully");
            }
            catch (System.UnauthorizedAccessException exUnauthException)
            {
                Console.WriteLine("Unable to touch file. Access is denied. The security manager responded: " + exUnauthException.Message);
            }
            catch (IOException exFileAccessException)
            {
                Console.WriteLine("Unable to touch file. The IO interface failed to complete request and responded: " + exFileAccessException.Message);
            }
            catch (Exception exGenericException)
            {
                Console.WriteLine("Unable to touch file. An internal error occurred. The details are: " + exGenericException.Message);
            }
        }
    }
}
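One behavioral difference from the Unix original: touch creates the file when it does not exist, while the code above only reports an error. A possible variant of the not-found branch (my own sketch, not part of the original post) could be:

    // Mimic Unix touch: create an empty file instead of reporting an error.
    using (File.Create(args[0])) { }
    Console.WriteLine("The file did not exist and has been created with the current timestamp.");
    return;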

    Read the article

  • SQL Authority News – Download SQL Server Data Type Conversion Chart

    - by pinaldave
Datatypes are a very important concept in SQL Server, and quite often values need to be converted from one datatype to another. I have seen that developers often get confused when they have to convert a datatype. There are two important concepts when it comes to datatype conversion. Implicit Conversion: Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function. Explicit Conversions: Explicit conversions are those conversions that require the CAST or CONVERT function to be specified. What this means is that if you are trying to convert a value from datetime2 to time or from tinyint to int, SQL Server will convert it automatically (an implicit conversion) for you. However, if you are attempting to convert timestamp to smalldatetime or datetime to int, you will need to explicitly convert them using either the CAST or the CONVERT function with the appropriate parameters. The original post illustrates implicit and explicit conversion with screenshots; a small T-SQL sketch follows below. From these examples you can see how both types of conversion are needed in different situations. There are so many different datatypes that it is humanly impossible to know which ones convert implicitly and which require explicit conversion. Additionally, there are cases where conversion is not possible at all. Microsoft has published a chart in which a grid displays the various conversion possibilities along with a quick guide. Download SQL Server Data Type Conversion Chart Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
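A minimal T-SQL sketch of the two cases mentioned above (my own illustration, since the original post shows them as screenshots):

    -- Implicit conversion: a datetime2 value assigned to a time variable, no CAST/CONVERT needed
    DECLARE @dt2 DATETIME2 = '2011-01-12 10:30:00';
    DECLARE @t TIME = @dt2;
    SELECT @t AS ImplicitResult;

    -- Explicit conversion: datetime to int requires CAST or CONVERT
    DECLARE @dt DATETIME = '2011-01-12 10:30:00';
    SELECT CAST(@dt AS INT) AS ExplicitCast, CONVERT(INT, @dt) AS ExplicitConvert;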

    Read the article

  • Identity Claims Encoding for SharePoint

    - by Shawn Cicoria
Just to remind myself, the list of claim types and their encodings is listed here at the bottom. http://msdn.microsoft.com/en-us/library/gg481769.aspx

For example, in i:0#.w|contoso\scicoria:

'i' = identity claim, could be 'c' for others
'#' == SPClaimTypes.UserLogonName
'.' == Microsoft.IdentityModel.Claims.ClaimValueTypes.String

Tables for reference:

Table 1. Claim types encoding

Character  Claim Type
!  SPClaimTypes.IdentityProvider
"  SPClaimTypes.UserIdentifier
#  SPClaimTypes.UserLogonName
$  SPClaimTypes.DistributionListClaimType
%  SPClaimTypes.FarmId
&  SPClaimTypes.ProcessIdentitySID
'  SPClaimTypes.ProcessIdentityLogonName
(  SPClaimTypes.IsAuthenticated
)  Microsoft.IdentityModel.Claims.ClaimTypes.PrimarySid
*  Microsoft.IdentityModel.Claims.ClaimTypes.PrimaryGroupSid
+  Microsoft.IdentityModel.Claims.ClaimTypes.GroupSid
-  Microsoft.IdentityModel.Claims.ClaimTypes.Role
.  System.IdentityModel.Claims.ClaimTypes.Anonymous
/  System.IdentityModel.Claims.ClaimTypes.Authentication
0  System.IdentityModel.Claims.ClaimTypes.AuthorizationDecision
1  System.IdentityModel.Claims.ClaimTypes.Country
2  System.IdentityModel.Claims.ClaimTypes.DateOfBirth
3  System.IdentityModel.Claims.ClaimTypes.DenyOnlySid
4  System.IdentityModel.Claims.ClaimTypes.Dns
5  System.IdentityModel.Claims.ClaimTypes.Email
6  System.IdentityModel.Claims.ClaimTypes.Gender
7  System.IdentityModel.Claims.ClaimTypes.GivenName
8  System.IdentityModel.Claims.ClaimTypes.Hash
9  System.IdentityModel.Claims.ClaimTypes.HomePhone
<  System.IdentityModel.Claims.ClaimTypes.Locality
=  System.IdentityModel.Claims.ClaimTypes.MobilePhone
>  System.IdentityModel.Claims.ClaimTypes.Name
?  System.IdentityModel.Claims.ClaimTypes.NameIdentifier
@  System.IdentityModel.Claims.ClaimTypes.OtherPhone
[  System.IdentityModel.Claims.ClaimTypes.PostalCode
\  System.IdentityModel.Claims.ClaimTypes.PPID
]  System.IdentityModel.Claims.ClaimTypes.Rsa
^  System.IdentityModel.Claims.ClaimTypes.Sid
_  System.IdentityModel.Claims.ClaimTypes.Spn
`  System.IdentityModel.Claims.ClaimTypes.StateOrProvince
a  System.IdentityModel.Claims.ClaimTypes.StreetAddress
b  System.IdentityModel.Claims.ClaimTypes.Surname
c  System.IdentityModel.Claims.ClaimTypes.System
d  System.IdentityModel.Claims.ClaimTypes.Thumbprint
e  System.IdentityModel.Claims.ClaimTypes.Upn
f  System.IdentityModel.Claims.ClaimTypes.Uri
g  System.IdentityModel.Claims.ClaimTypes.Webpage

Table 2. Claim value types encoding

Character  Claim Value Type
!  Microsoft.IdentityModel.Claims.ClaimValueTypes.Base64Binary
"  Microsoft.IdentityModel.Claims.ClaimValueTypes.Boolean
#  Microsoft.IdentityModel.Claims.ClaimValueTypes.Date
$  Microsoft.IdentityModel.Claims.ClaimValueTypes.Datetime
%  Microsoft.IdentityModel.Claims.ClaimValueTypes.DaytimeDuration
&  Microsoft.IdentityModel.Claims.ClaimValueTypes.Double
'  Microsoft.IdentityModel.Claims.ClaimValueTypes.DsaKeyValue
(  Microsoft.IdentityModel.Claims.ClaimValueTypes.HexBinary
)  Microsoft.IdentityModel.Claims.ClaimValueTypes.Integer
*  Microsoft.IdentityModel.Claims.ClaimValueTypes.KeyInfo
+  Microsoft.IdentityModel.Claims.ClaimValueTypes.Rfc822Name
-  Microsoft.IdentityModel.Claims.ClaimValueTypes.RsaKeyValue
.  Microsoft.IdentityModel.Claims.ClaimValueTypes.String
/  Microsoft.IdentityModel.Claims.ClaimValueTypes.Time
0  Microsoft.IdentityModel.Claims.ClaimValueTypes.X500Name
1  Microsoft.IdentityModel.Claims.ClaimValueTypes.YearMonthDuration
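As a rough illustration of how an encoded value breaks apart character by character, here is a plain string-parsing sketch based only on the format described above; it does not use any SharePoint API and is my own illustration:

    using System;

    class ClaimEncodingSketch
    {
        static void Main()
        {
            Decode(@"i:0#.w|contoso\scicoria");
        }

        static void Decode(string encoded)
        {
            string[] parts = encoded.Split('|');
            string header = parts[0];                    // e.g. "i:0#.w"
            string value  = parts[parts.Length - 1];     // e.g. "contoso\scicoria"

            char identityMarker = header[0];             // 'i' = identity claim, 'c' for others
            char claimType      = header[3];             // '#' -> SPClaimTypes.UserLogonName (Table 1)
            char valueType      = header[4];             // '.' -> ClaimValueTypes.String (Table 2)

            Console.WriteLine("{0} | {1} | {2} -> {3}", identityMarker, claimType, valueType, value);
        }
    }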

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some points that we covered earlier: the DMV related to wait stats resets when SQL Server services are restarted, and it also resets when we manually reset the wait types. However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make some modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

    -- Create Table
    CREATE TABLE [MyWaitStatTable](
    [wait_type] [nvarchar](60) NOT NULL,
    [waiting_tasks_count] [bigint] NOT NULL,
    [wait_time_ms] [bigint] NOT NULL,
    [max_wait_time_ms] [bigint] NOT NULL,
    [signal_wait_time_ms] [bigint] NOT NULL,
    [CurrentDateTime] DATETIME NOT NULL,
    [Flag] INT)
    GO
    -- Populate Table at Time 1
    INSERT INTO MyWaitStatTable
    ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 1
    FROM sys.dm_os_wait_stats
    GO
    -- Desired Delay (for one hour)
    WAITFOR DELAY '01:00:00'
    -- Populate Table at Time 2
    INSERT INTO MyWaitStatTable
    ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 2
    FROM sys.dm_os_wait_stats
    GO
    -- Check the difference between Time 1 and Time 2
    SELECT T1.wait_type,
           T1.wait_time_ms Original_WaitTime,
           T2.wait_time_ms LaterWaitTime,
           (T2.wait_time_ms - T1.wait_time_ms) DifferenceWaitTime
    FROM MyWaitStatTable T1
    INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
    WHERE T2.wait_time_ms > T1.wait_time_ms
    AND T1.Flag = 1 AND T2.Flag = 2
    ORDER BY DifferenceWaitTime DESC
    GO
    -- Clean up
    DROP TABLE MyWaitStatTable
    GO

    If you notice the script, I have used an additional column called Flag. I use it to record when I captured the wait stats, and then I use it in my SELECT query to select the wait stats related to that time group. Many times I capture more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them. In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Entity Framework & Transactions

    - by Sudheer Kumar
    There are many instances where we might have to use transactions to maintain data consistency. With Entity Framework, it is a little different conceptually.

    Case 1 – Transaction between multiple SaveChanges() calls: here, if you just use a transaction scope, Entity Framework (EF) will use a distributed transaction instead of a local transaction. The reason is that EF opens and closes the connection only when required, which means it may use two different connections for the different SaveChanges() calls. To resolve this, use the following method, where we open the connection explicitly so the work does not span multiple connections.

    try
    {
        using (TransactionScope ts = new TransactionScope())
        {
            // Open the connection explicitly so both SaveChanges() calls reuse it
            context.Connection.Open();

            // Operation 1
            context.SaveChanges();

            // Operation 2
            context.SaveChanges();

            // Commit the transaction scope at the end
            ts.Complete();
        }
    }
    catch (Exception ex)
    {
        // Handle exception
    }
    finally
    {
        if (context.Connection.State == ConnectionState.Open)
        {
            context.Connection.Close();
        }
    }

    Case 2 – Transaction between DB and non-DB operations: for example, assume that you have a table that keeps track of emails to be sent. Here you want to update certain details, like DateSent, only if the mail was successfully sent by the e-mail client.

    Email eml = GetEmailToSend();
    eml.DateSent = DateTime.Now;
    using (TransactionScope ts = new TransactionScope())
    {
        // Update DB
        context.SaveChanges();

        // If the update is successful, send the email using the SMTP client
        smtpClient.Send();

        // If the send was successful, then commit
        ts.Complete();
    }

    Here, since you are dealing with a single context.SaveChanges(), you just need to use the TransactionScope around the code that saves the context. Hope this was helpful!

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #003

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2006
    This was the first year of my blogging, and I was learning lots of new things as I went. I was indeed an infant in blogging a few years ago. However, as time passed by, I have learned a lot. This year was a year of experiments and new learning.

    2007
    Working as a full-time DBA, I often encountered various errors, and I started to learn how to avoid those errors and document the same.
    ERROR Msg 5174 Each file size must be greater than or equal to 512 KB – Whenever I see this error I wonder why someone is trying to create such an extremely small database. Anyway, it does not matter what I think; I keep on seeing this error often in the industry. Anyway, the solution to the error is equally interesting – just create a larger database.
    Dilbert Humor – This was my very first encounter with database humor and I started to love it. It does not matter how many times we read this cartoon, it does not get old.
    Generate Script with Data from Database – Database Publishing Wizard – Generating a schema script with data is one of the most frequently performed tasks among SQL Server data professionals. There are many ways to do the same. In the above article I demonstrated how we can use the Database Publishing Wizard to accomplish this. It was new to me at that time, but I still have not seen much adoption of it in the industry. Here is one of my videos where I demonstrate how we can generate data with schema.

    2008
    Delete Backup History – Cleanup Backup History – Deleting backup history is important too, but it should be done carefully. If this is not carried out at regular intervals, there is a good chance that msdb will fill up with all the old history. Every organization is different. Some would like to keep the history for 30 days and some for a year, but there should be some limit. One should regularly archive the database backup history.
    South Asia MVP Open Days 2008 – This was my very first year as a Microsoft MVP. I indeed had a big blast at the event and the fun was incredible. After this event I have attended many different MVP events, but the fun and learning this particular event presented were amazing, and just like me, many others are not able to forget it. Here are other links related to the event: South Asia MVP Open Day 2008 – Goa; South Asia MVP Open Day 2008 – Goa – Day 1; South Asia MVP Open Day 2008 – Goa – Day 2; South Asia MVP Open Day 2008 – Goa – Day 3

    2009
    Enable or Disable Constraint – This is a very simple script, but I personally keep forgetting it, so I blogged it. Till today, I keep on referencing it again and again, as sometimes a very little thing is hard to remember (see the sketch below).
    Policy Based Management – Create, Evaluate and Fix Policies – This article covers the most spectacular feature of SQL Server 2008 – policy-based management – and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy-based management is loaded with several advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administrative assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise.
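    As a quick reminder of that enable/disable syntax, here is a minimal sketch; the table and constraint names (dbo.Orders, FK_Orders_Customers) are hypothetical placeholders, not taken from the original article.

    -- Disable a foreign key constraint (hypothetical names, for illustration only)
    ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;

    -- Re-enable it; WITH CHECK re-validates existing rows against the constraint
    ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;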
    SQLPASS 2009 – My Very First SQLPASS Experience – Just brilliant! I had never experienced such a thing in my life. SQL, SQL and SQL – SQL all around! I am listing my own reasons here in order of importance to me: networking with SQL fellows and experts; putting a face to the name or avatar; learning and improving my SQL skills; understanding the structure of the largest SQL Server professional association; attending my favorite training sessions. Since then I have never missed this event a single time. This event is my favorite event and something that keeps me going. Here are additional posts related to SQLPASS 2009: SQL PASS Summit, Seattle 2009 – Day 1; SQL PASS Summit, Seattle 2009 – Day 2; SQL PASS Summit, Seattle 2009 – Day 3; SQL PASS Summit, Seattle 2009 – Day 4

    2010
    Get All the Information of Database using sys.databases – Even though we believe that we know everything about our database, there is a lot we do not know about it. This little script enables us to learn many details about our databases which we may not be familiar with. Run it on your server today and see how well you know your database.
    Reducing CXPACKET Wait Stats for High Transactional Database – While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merit or benefit, or not. I discuss the same in this article – a bit long but insightful for sure.
    Error related to Database in Use – There are many database management operations in SQL Server which require exclusive access to the database, and it is not always possible to get it. When any database is online in SQL Server, applications or system threads often access it. This means the database cannot grant exclusive access, and the operations which require this access throw an error. There is a very easy method to overcome this minor issue – a single-line script can give you exclusive access to the database.
    Difference between DATETIME and DATETIME2 – Developers have found the root cause of the problem when dealing with date functions: when date/time values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results do not match as expected. In this blog post I go over very interesting details and the differences between DATETIME and DATETIME2.
    History of SQL Server Database Encryption – I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving encryption technology. The same discussion led to talking about the history of encryption tools in SQL Server. Michael pointed me to page 18 of his encryption book. He explicitly gave me permission to reproduce the relevant part of the history from his book.

    2011
    Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY – Sometimes an interesting feature and a smart audience make a total difference. For the last two days, I have been writing about the SQL Server 2012 functions FIRST_VALUE and LAST_VALUE. I created a puzzle which was very interesting and got many people to attempt to solve it.
    It was based on the following two articles: Introduction to FIRST_VALUE and LAST_VALUE; Introduction to FIRST_VALUE and LAST_VALUE with OVER clause. I even provided a hint about how one can solve the problem. The best part was that many people solved the problem without using the hints! Try your luck!
    A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available – This is a great problem and everybody would love to have it. We had it and we loved it. Our book went out of stock within 48 hours of release and the stocks were empty. We faced many issues and learned many valuable lessons. Some we were able to avoid in the future, and some we are still facing, as those problems have no solutions. However, since that day our books have never gone out of stock. This is an inspiring learning story for us, and I am confident that you will love to read it as well.
    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012 – SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It is very difficult to explain this in words, so I will attempt a small example to explain the functions (a brief sketch follows below). I had a fantastic time writing this blog post, and I am very confident that when you read it, you will like it.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
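    A minimal sketch of LEAD() and LAG() on SQL Server 2012 or later might look like the following; the temporary #Sales table and its values are hypothetical, used only to illustrate the functions.

    -- Hypothetical sample data
    CREATE TABLE #Sales (SaleDate DATE, Amount INT);
    INSERT INTO #Sales VALUES ('2011-01-01', 100), ('2011-01-02', 150), ('2011-01-03', 90);

    -- LAG() looks at the previous row, LEAD() at the next row, ordered by SaleDate
    SELECT SaleDate,
           Amount,
           LAG(Amount)  OVER (ORDER BY SaleDate) AS PreviousAmount,
           LEAD(Amount) OVER (ORDER BY SaleDate) AS NextAmount
    FROM #Sales
    ORDER BY SaleDate;

    DROP TABLE #Sales;

    The first row returns NULL for PreviousAmount and the last row returns NULL for NextAmount, since there is no earlier or later row to read from – all without a self-join.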

    Read the article

< Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >