Search Results


  • Vietnamese character in .NET Console Application (UTF-8)

    - by DucDigital
    I'm trying to write a UTF-8 string (Vietnamese) to the C# console, without success. I'm running on Windows 7. I tried using the Encoding class to convert the string to char[], then to byte[], and back to a string, but it didn't help; the string comes directly from the database. Here is an example: Tôi tên là Ð?c, cu?c s?ng th?t vui v? tuy?t v?i. The console does not show the special characters like Ð; it shows ? instead, and the output is even worse with the Encoding class. Can anyone try this out or explain this problem? Thank you. My code:

        static void Main(string[] args)
        {
            XDataContext _new = new XDataContext();
            Console.OutputEncoding = Encoding.GetEncoding("UTF-8");
            string srcString = _new.Posts.First().TITLE;
            Console.WriteLine(srcString);

            // Convert the UTF-16 encoded source string to UTF-8 and ASCII.
            byte[] utf8String = Encoding.UTF8.GetBytes(srcString);
            byte[] asciiString = Encoding.ASCII.GetBytes(srcString);

            // Write the UTF-8 and ASCII encoded byte arrays.
            Console.WriteLine("UTF-8 Bytes: {0}", BitConverter.ToString(utf8String));
            Console.WriteLine("ASCII Bytes: {0}", BitConverter.ToString(asciiString));

            // Convert the UTF-8 and ASCII encoded bytes back to UTF-16 strings and write.
            Console.WriteLine("UTF-8 Text : {0}", Encoding.UTF8.GetString(utf8String));
            Console.WriteLine("ASCII Text : {0}", Encoding.ASCII.GetString(asciiString));
            Console.WriteLine(Encoding.UTF8.GetString(utf8String));
            Console.WriteLine(Encoding.ASCII.GetString(asciiString));
        }

    and here is the output:

        Nhà báo Ä‘i há»™i báo Xuân
        UTF-8 Bytes: 4E-68-C3-A0-20-62-C3-A1-6F-20-C4-91-69-20-68-E1-BB-99-69-20-62-C3-A1-6F-20-58-75-C3-A2-6E
        ASCII Bytes: 4E-68-3F-20-62-3F-6F-20-3F-69-20-68-3F-69-20-62-3F-6F-20-58-75-3F-6E
        UTF-8 Text : Nhà báo Ä‘i há»™i báo Xuân
        ASCII Text : Nh? b?o ?i h?i b?o Xu?n
        Nhà báo Ä‘i há»™i báo Xuân
        Nh? b?o ?i h?i b?o Xu?n
        Press any key to continue . . .
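
    A detail worth noting from the byte dump above: the UTF-8 bytes are correct, so the string survives the round-trip and the console itself is the weak link. The code already sets OutputEncoding, so the remaining suspect is usually the console font. A minimal sketch of the combination that typically works on Windows 7, assuming the console window is switched to a TrueType font such as Lucida Console or Consolas (raster fonts cannot render these glyphs and fall back to '?'); the sample string here is a stand-in for the database value:

        using System;
        using System.Text;

        class Utf8ConsoleDemo
        {
            static void Main()
            {
                // Switch the console from the default OEM code page to UTF-8.
                Console.OutputEncoding = Encoding.UTF8;

                // Renders correctly only when the console window uses a
                // TrueType font; with a raster font this prints '?' marks.
                Console.WriteLine("Tiếng Việt");
            }
        }

    In other words, this is as much a console-window setting as a code change: even with the right encoding, the default raster font still prints question marks.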


  • weird behavior with acts_as_taggable_on

    - by macek
    For some reason, tags aren't showing up on a taggable object when a tagger is specified.

    Testing the Post class:

        class Post < ActiveRecord::Base
          acts_as_taggable_on :tags
          belongs_to :user
        end

        >> p = Post.first
        => #<Post id: 1, ...>
        >> p.is_taggable?
        => true
        >> p.tag_list = "foo, bar"
        => "foo, bar"
        >> p.save
        => true
        >> p.tags
        => [#<Tag id: 1, name: "foo">, #<Tag id: 2, name: "bar">]

    Testing the User class:

        class User < ActiveRecord::Base
          acts_as_tagger
          has_many :posts
        end

        >> u = User.first
        => #<User id: 1, ...>
        >> u.is_tagger?
        => true
        >> u.tag(p, :with => "hello, world", :on => :tags)
        => true
        >> u.owned_tags
        => [#<Tag id: 3, name: "hello">, #<Tag id: 4, name: "world">]

    Refresh the post:

        >> p = Post.first
        => #<Post id: 1 ...>
        >> p.tags
        => [#<Tag id: 2, name: "bar">, #<Tag id: 1, name: "foo">]

    Where are the hello and world tags? Miraculously, if I modify the database directly to set tagger_id and tagger_type to NULL, the two missing tags show up. I suspect there's something wrong with my User model? What gives?


  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .Net web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution now gives developers tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have migrated, or are in the process of migrating, existing applications out of their own data centers into the Azure cloud. Whether doing new application development or migrating legacy applications, every development shop and IT organization needs to monitor its applications in the cloud, the same as it does on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options.

    Azure Diagnostics

    Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high-level steps are:

    Step 1: Import the Diagnostics. Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy.

    Step 2: Configure the Diagnostics (and multiple sub-steps). Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter; I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile, and deploy again? And query and consume the data myself?

    Step 3: (Optional) Permanently store diagnostic data. Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well.

    Step 4: (Optional) View stored diagnostic data. Optional? Of course I want to see it. Conveniently, Microsoft recommends three tools to do this with. Un-conveniently, none of these are web based, and they all just give you access to raw data with very little charting or real-time intelligence. Just… data. Never mind that one product seems to have gone stale since a recent acquisition and doesn’t even have screenshots!

    So, let’s summarize: lots of diagnostics data is available, but think realistically. Think DevOps. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is a spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold, your customers have likely already let you know. And high CPU, memory, I/O, or network doesn’t tell you anything about where the problem is.

    The Better Way – Stackify

    Stackify supports application and server monitoring in real time, all through a great web interface. All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises: it’s a Windows Server (or Linux) in the cloud, behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps.

    Step 1: Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a PowerShell script to download and install Stackify.

    Step 2: Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want, and see the results instantly.

    WMI? It’s there. Event Viewer? You’ve got it. File system access? Yes, please! Would love to make sure my web.config is correct. IIS / app pool info? Yep. You can even restart it. Running services? All of them. Start and stop them to your heart’s content. SQL database access? You betcha. Alerts and notifications? Of course! You should know before your customers let you know. … and so much more.

    Conclusion

    Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times when we need data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then having to find a friendlier way to consume it… well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications than writing code to monitor and diagnose.


  • notification scheduling question

    - by sims
    Hi Stackers! I'm building an app that needs to send out notifications to users depending on user-definable "notifications". So the notifications are not per event; they are arbitrary. A cron job should query the database and send out emails when it finds an event with matching criteria. This is a scheduling app, so naturally one of the criteria is time. I think I've figured it out, but I'm sure there are better ideas out there, as it seems to be a fairly common thing to do.

    I think I'll limit the user's choice of "minutes beforehand" to get notified, as opposed to seconds or hours. So I have an eventA and a notificationA. notificationA should be triggered if an event is due within 45 minutes. eventA starts at 17:30, so the user should be notified at 16:45. But the cron job might not run at exactly 00 seconds. So when the difference is not 45 minutes and 0 seconds but, say, 45 minutes and 5 seconds, the notification time is past, the email doesn't get sent, and the user misses the event. Sugar. We should also take into account that the cron job might take a long time to run, so maybe we should only trigger it every 5 minutes; so maybe we need a bigger interval. So then my guess would be to say:

        if ((eventDueTime - now > notificationTimeValue - interval)
            && (eventDueTime - now < notificationTimeValue + interval))
            sendTheFrikinNotificationAlready();

    It seems kind of risky if there are thousands of notifications to send out. I guess I could make a thread for each notification and then a thread for each event that matches the criteria. That might help. Does that make sense? Any other ideas? Thanks!
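
    One way around the drift problem described above is to stop comparing against a symmetric tolerance and instead remember when the job last ran: anything whose notification moment falls between the last run and now gets sent, so a late or slow cron tick delays a notification instead of dropping it. A hedged sketch of that idea in C# (all names are invented for illustration; this is not the poster's code):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Event
        {
            public DateTime StartsAt { get; set; }   // e.g. 17:30
            public TimeSpan NotifyLead { get; set; } // e.g. 45 minutes beforehand
        }

        static class Notifier
        {
            // Everything whose fire time falls in (lastRun, now] is due,
            // no matter how late or how slowly the scheduler ticked.
            public static IEnumerable<Event> DueEvents(
                IEnumerable<Event> events, DateTime lastRun, DateTime now)
            {
                return events.Where(e =>
                {
                    DateTime fireAt = e.StartsAt - e.NotifyLead;
                    return fireAt > lastRun && fireAt <= now;
                });
            }
        }

    The job then persists now as the new lastRun after the emails go out, so a crashed run is retried on the next tick rather than skipped.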


  • Calculations in a table of data

    - by Christian W
    I have a table of data with survey results, and I want to do certain calculations on this data. The data structure is somewhat like this:

        ____________________________________________________________________________________
        | group |individual |         key          |         key          |       key      |
        |       |           |subkey|subkey|subkey|subkey|subkey|subkey|subkey|subkey|subkey|
        |       |           |q|q|q |q |q  |q|q|q |q|q|q |q |q  |q|q|q |q|q|q |q |q  |q|q|q |
        |-------|-----------|-|-|--|--|---|-|-|--|-|-|--|--|---|-|-|--|-|-|--|--|---|-|-|--|
        |   1   |   0001    |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |
        |   1   |   0002    |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |
        |   1   |   0003    |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |
        |   2   |   0004    |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |
        |   2   |   0005    |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |
        |   3   |   0006    |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |
        |   4   |   0007    |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |1|7|5 |1 |3  |1|4|1 |
        ------------------------------------------------------------------------------------

    Excuse my poor ASCII skills... So, every individual belongs to a group and has answered some questions. These questions are always grouped into keys and subkeys. Is there any simple method to calculate averages, deviations, and the like based on the groupings? Something like:

        public float getAverage(int key, int individual);
        float avg = getAverage(5, 7);

    I think what I'm asking is: what would be the best way to structure the data in C# to make it as easy as possible to work with? I have started making classes for every entity, but I got confused somewhere and something stopped working. So before I continue along this path, I was wondering if there are any other, better ways of doing this? (Every individual can also have describing variables, like age group and such, but that's not important for the base functionality.) Our current solution does all calculations inline in the queries when requesting the data from the database. This works, but it's slow, and the number of queries equals questions * individuals + keys * individuals, which could be a lot of individual queries. Any suggestions?
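
    One common approach in C# is to flatten every answer into a single record and let LINQ do the grouping, instead of mirroring the nested key/subkey table layout in the class structure. A sketch under that assumption (all type and member names here are invented for illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One row per answered question: who answered it, where the question
        // sits in the key/subkey hierarchy, and the numeric answer.
        class Answer
        {
            public int Group;
            public int Individual;
            public int Key;
            public int Subkey;
            public int Question;
            public float Value;
        }

        static class SurveyStats
        {
            // Average of everything a given individual answered under a given key.
            public static float GetAverage(IEnumerable<Answer> answers, int key, int individual)
            {
                return answers.Where(a => a.Key == key && a.Individual == individual)
                              .Average(a => a.Value);
            }

            // Population standard deviation of any set of values.
            public static double StdDev(IEnumerable<float> values)
            {
                List<float> list = values.ToList();
                double mean = list.Average(v => (double)v);
                return Math.Sqrt(list.Sum(v => (v - mean) * (v - mean)) / list.Count);
            }
        }

    With the data in this shape, group-level or key-level statistics are one Where/GroupBy away, and the database can be queried once up front instead of once per question and individual.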


  • .NET C# FileStream writing to and reading from a file

    - by pythonrg7
    I have a web service that checks a dictionary to see if a file exists; if it does exist, it reads the file, otherwise it saves to the file. This is from a web app. I wonder what the best way to do this is, because I occasionally get a FileNotFoundException if the same file is accessed at the same time. Here are the relevant parts of the code:

        String signature = "FILE," + value1 + "," + value2 + "," + value3 + "," + value4; // this is going to be the filename
        string result;
        // MultipleRecordset is an object that retrieves data from a SQL Server database
        MultipleRecordset mrSummary = new MultipleRecordset();
        if (mrSummary.existsFile(signature))
        {
            result = mrSummary.retrieveFile(signature);
        }
        else
        {
            result = mrSummary.getMultipleRecordsets(
                System.Configuration.ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString.ToString(),
                value1, value2, value3, value4);
            mrSummary.saveFile(signature, result);
        }

    Here's the code to see if the file already exists:

        private static Dictionary<string, bool> dict = new Dictionary<string, bool>();

        public bool existsFile(string signature)
        {
            if (dict.ContainsKey(signature))
            {
                return true;
            }
            else
            {
                return false;
            }
        }

    Here's what I use to retrieve the file if it already exists:

        try
        {
            byte[] buffer;
            FileStream fileStream = new FileStream(
                @System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename,
                FileMode.Open, FileAccess.Read, FileShare.Read);
            try
            {
                int length = 0x8000;       // read in 32 KB chunks
                buffer = new byte[length]; // create buffer
                int count;                 // actual number of bytes read
                JSONstring = "";
                while ((count = fileStream.Read(buffer, 0, length)) > 0)
                {
                    JSONstring += System.Text.ASCIIEncoding.ASCII.GetString(buffer, 0, count);
                }
            }
            finally
            {
                fileStream.Close();
            }
        }
        catch (Exception e)
        {
            JSONstring = "{\"error\":\"" + e.ToString() + "\"}";
        }

    If the file doesn't previously exist, it saves the JSON to the file:

        try
        {
            if (dict.ContainsKey(filename) == false)
            {
                dict.Add(filename, true);
            }
            else
            {
                this.retrieveFile(filename, ipaddress);
            }
        }
        catch { }
        try
        {
            TextWriter tw = new StreamWriter(
                @System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename);
            tw.WriteLine(JSONstring);
            tw.Close();
        }
        catch { }

    Here are the details of the exception I sometimes get from running the above code:

        System.IO.FileNotFoundException: Could not find file 'E:\inetpub\wwwroot\cache\FILE,36,36.25,14.5,14.75'.
        File name: 'E:\inetpub\wwwroot\cache\FILE,36,36.25,14.5,14.75'
           at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
           at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
           at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
           at com.myname.business.MultipleRecordset.retrieveFile(String filename, String ipaddress)
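
    The symptom fits a race between the dictionary and the disk: the key is added to the dictionary before the file is written, so a second request can see the key, call retrieveFile, and try to open a file that does not exist on disk yet. A hedged sketch of one way to close that gap, using the file system itself as the source of truth and a per-file lock (names are invented; a production version would also need cache eviction):

        using System;
        using System.Collections.Concurrent;
        using System.IO;

        static class FileCache
        {
            // One gate object per cache file, so readers and writers of the
            // same file serialize while different files proceed in parallel.
            static readonly ConcurrentDictionary<string, object> Gates =
                new ConcurrentDictionary<string, object>();

            public static string GetOrCreate(string path, Func<string> produce)
            {
                object gate = Gates.GetOrAdd(path, _ => new object());
                lock (gate)
                {
                    if (File.Exists(path))         // the disk, not a dictionary, decides
                        return File.ReadAllText(path);

                    string json = produce();       // the expensive database call
                    File.WriteAllText(path, json); // fully written before anyone can read
                    return json;
                }
            }
        }

    ConcurrentDictionary is .NET 4; on earlier frameworks the same idea works with a plain Dictionary guarded by a single lock.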


  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can use VPN, Virtual Network, or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and virtual machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premises. Note: Azure Connect is currently in Technical Preview.

    About Azure Connect

    Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups; we will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premises servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility in how you build your virtual network across various environments. So Azure Connect makes sense when you want to:

    - Connect machines located in different cloud providers
    - Connect on-premises machines running in different locations
    - Connect Azure VMs with on-premises machines (if you do not have a VPN device, or if your device is not supported)
    - Connect Azure roles (worker roles, web roles) with on-premises servers or with other cloud providers

    The diagram below shows a high-level network topology that involves machines in the Windows Azure cloud, other cloud providers, and on-premises. Note that the only required component in this diagram is the Relay itself; the other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe, and Asia. You can change which region you want to use in the Windows Azure management portal.

    High Level Network Topology With Azure Connect

    Azure Connect Agent

    Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on worker roles and web roles, because the agent is already installed for you. Any other machine, including virtual machines in Windows Azure, needs the agent installed. To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon begins the download and installation process for the agent.

    Activating Roles for Azure Connect

    As previously mentioned, you do not need to install the Azure Connect Agent on worker roles and web roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx.

    Firewall Rules

    Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx.

    CA Certificates

    You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more.

    Groups

    To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups, allowing you to manage network communication differently; for example, you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent, which creates a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows that I created a group called LocalGroup and assigned two machines (both on-premises) to that group.

    Groups and Computers in Azure Connect

    As mentioned previously, you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other groups, allowing you to manage inter-group communication.

    Defining a Group in Azure Connect

    Testing the Connection

    Now that my agents have been installed on my two machines, the group is defined, and the Interconnected option is checked, I can test the connection between my machines. The next screenshot shows that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note, however, that the round-trip time is in the hundreds of milliseconds on average. That is to be expected, because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests.

    Sending a PING Request Through The Relay

    Conclusion

    As you can see, creating a network topology between machines using the Azure Connect service is simple. It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx.

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specializing in cloud computing products and services. Herve's experience includes software development, architecture, database administration, and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Special Thanks

    I would like to thank those who helped me figure out how Azure Connect works:

    Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/
    Michael Wood - http://www.mvwood.com
    Glenn Block - http://www.codebetter.com/glennblock
    Yves Goeleven - http://cloudshaper.wordpress.com/
    Sandrino Di Mattia - http://fabriccontroller.net/
    Mike Martin - http://techmike2kx.wordpress.com


  • WCF web service Data Members defaulting to null

    - by James
    I am new to WCF and created a simple REST service that accepts an order object (a series of strings from an XML file), inserts that data into a database, and then returns an order object that contains the results. To test the service I created a small web project and send over a stream created from an XML doc. The problem is that even though all of the items in the XML doc get placed into the stream, the service nullifies some of them when it receives the data. For example, lineItemId will have a value but shipmentStatus will show null. I stepped through the XML creation and verified that all the values are being sent. However, if I clear the data members and change the names around, it can work. Any help would be appreciated.

    This is the interface code:

        [ServiceContract]
        public interface IShipping
        {
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "/Orders/UpdateOrderStatus/",
                       BodyStyle = WebMessageBodyStyle.Bare)]
            ReturnOrder UpdateOrderStatus(Order order);
        }

        [DataContract]
        public class Order
        {
            [DataMember]
            public string lineItemId { get; set; }
            [DataMember]
            public string shipmentStatus { get; set; }
            [DataMember]
            public string trackingNumber { get; set; }
            [DataMember]
            public string shipmentDate { get; set; }
            [DataMember]
            public string delvryMethod { get; set; }
            [DataMember]
            public string shipmentCarrier { get; set; }
        }

        [DataContract]
        public class ReturnOrder
        {
            [DataMember(Name = "Result")]
            public string Result { get; set; }
        }

    This is what I'm using to send over an Order object:

        string lineId = txtLineItem.Text.Trim();
        string status = txtDeliveryStatus.Text.Trim();
        string TrackingNumber = "1x22-z4r32";
        string theMethod = "Ground";
        string carrier = "UPS";
        string ShipmentDate = "04/27/2010";

        XNamespace nsOrders = "http://tempuri.org/order";
        XElement myDoc = new XElement(nsOrders + "Order",
            new XElement(nsOrders + "lineItemId", lineId),
            new XElement(nsOrders + "shipmentStatus", status),
            new XElement(nsOrders + "trackingNumber", TrackingNumber),
            new XElement(nsOrders + "delvryMethod", theMethod),
            new XElement(nsOrders + "shipmentCarrier", carrier),
            new XElement(nsOrders + "shipmentDate", ShipmentDate)
        );

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://localhost:3587/Deposco.svc/wms/Orders/UpdateOrderStatus/");
        request.Method = "POST";
        request.ContentType = "application/xml";
        try
        {
            request.ContentLength = myDoc.ToString().Length;
            StreamWriter sw = new StreamWriter(request.GetRequestStream());
            sw.Write(myDoc);
            sw.Close();
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                StreamReader reader = new StreamReader(response.GetResponseStream());
                string responseString = reader.ReadToEnd();
                XDocument.Parse(responseString).Save(@"c:\DeposcoSvcWCF.xml");
            }
        }
        catch (WebException wEx)
        {
            Stream errorStream = ((HttpWebResponse)wEx.Response).GetResponseStream();
            string errorMsg = new StreamReader(errorStream).ReadToEnd();
        }
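
    One detail worth checking in exactly this situation (offered as a likely cause, not a confirmed diagnosis): by default the DataContractSerializer expects child elements in alphabetical order, and an element that arrives out of order is silently skipped, which leaves that member null rather than raising an error. The XML above sends delvryMethod after trackingNumber, which is not alphabetical. One hedged fix is to pin the expected order on the contract so it matches what the client actually sends:

        [DataContract(Namespace = "http://tempuri.org/order")]
        public class Order
        {
            // Explicit Order values make the serializer accept the elements
            // in the exact sequence the test client writes them.
            [DataMember(Order = 0)] public string lineItemId { get; set; }
            [DataMember(Order = 1)] public string shipmentStatus { get; set; }
            [DataMember(Order = 2)] public string trackingNumber { get; set; }
            [DataMember(Order = 3)] public string delvryMethod { get; set; }
            [DataMember(Order = 4)] public string shipmentCarrier { get; set; }
            [DataMember(Order = 5)] public string shipmentDate { get; set; }
        }

    The Namespace value shown matches the test client's "http://tempuri.org/order"; the contract namespace and the XML namespace must agree as well, or every member deserializes as null.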


  • NHibernate exception on query

    - by Yoav
    I'm getting a mapping exception doing the most basic query. This is my domain class:

        public class Project
        {
            public virtual string PK { get; set; }
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Description { get; set; }
        }

    And the mapping class:

        public class ProjectMap : ClassMap<Project>
        {
            public ProjectMap()
            {
                Table("PROJECTS");
                Id(x => x.PK, "PK");
                Map(x => x.Id, "ID");
                Map(x => x.Name, "NAME");
                Map(x => x.Description, "DESCRIPTION");
            }
        }

    Configuration:

        public ISessionFactory SessionFactory
        {
            get
            {
                return Fluently.Configure()
                    .Database(MsSqlCeConfiguration.Standard.ShowSql()
                        .ConnectionString(c => c.Is("data source=" + path)))
                    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Project>())
                    .BuildSessionFactory();
            }
        }

    And the query:

        IList project;
        using (ISession session = SessionFactory.OpenSession())
        {
            IQuery query = session.CreateQuery("from Project");
            project = query.List<Project>();
        }

    I'm getting the exception on the query line:

        NHibernate.Hql.Ast.ANTLR.QuerySyntaxException: Project is not mapped [from Project]
           at NHibernate.Hql.Ast.ANTLR.SessionFactoryHelperExtensions.RequireClassPersister(String name)
           at NHibernate.Hql.Ast.ANTLR.Tree.FromElementFactory.AddFromElement()
           at NHibernate.Hql.Ast.ANTLR.Tree.FromClause.AddFromElement(String path, IASTNode alias)
           at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromElement()
           at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromElementList()
           at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromClause()
           at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.unionedQuery()
           at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.query()
           at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.selectStatement()
           at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.statement()
           at NHibernate.Hql.Ast.ANTLR.HqlSqlTranslator.Translate()
           at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.Analyze(HqlParseEngine parser, String collectionRole)
           at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.DoCompile(IDictionary`2 replacements, Boolean shallow, String collectionRole)
           at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.Compile(IDictionary`2 replacements, Boolean shallow)
           at NHibernate.Engine.Query.HQLQueryPlan..ctor(String hql, String collectionRole, Boolean shallow, IDictionary`2 enabledFilters, ISessionFactoryImplementor factory)
           at NHibernate.Engine.Query.QueryPlanCache.GetHQLQueryPlan(String queryString, Boolean shallow, IDictionary`2 enabledFilters)
           at NHibernate.Impl.AbstractSessionImpl.GetHQLQueryPlan(String query, Boolean shallow)
           at NHibernate.Impl.AbstractSessionImpl.CreateQuery(String queryString)

    I assume something is wrong with my query.
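
    "X is not mapped" usually means the session factory never registered the mapping at runtime, not that the HQL itself is wrong. A hedged diagnostic that separates the two (a sketch only, meant to slot into the code above):

        using NHibernate;

        static class MappingCheck
        {
            // If this works while CreateQuery("from Project") fails, the mapping
            // exists and the problem is HQL entity-name resolution; if both fail,
            // the fluent mappings were never picked up by this factory instance.
            public static int CountProjects(ISessionFactory factory)
            {
                using (ISession session = factory.OpenSession())
                {
                    return session.CreateCriteria(typeof(Project)).List<Project>().Count;
                }
            }
        }

    Another thing the configuration above makes easy to miss: the SessionFactory property rebuilds the factory on every access, so caching the built factory in a field is the usual pattern and also rules out querying a factory other than the one the mappings were added to.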


  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so:

        CREATE TABLE AddressLookup (
            ID INTEGER PRIMARY KEY AUTOINCREMENT,
            Address TEXT UNIQUE
        );
        CREATE TABLE EmailInfo (
            MessageID INTEGER PRIMARY KEY AUTOINCREMENT,
            ToAddrRef INTEGER REFERENCES AddressLookup(ID),
            FromAddrRef INTEGER REFERENCES AddressLookup(ID)
            /* Additional columns omitted for brevity. */
        );

    And for convenience, a view to join these tables:

        CREATE VIEW EmailView AS
        SELECT MessageID,
               A1.Address AS ToAddr,
               A2.Address AS FromAddr
        FROM EmailInfo
        LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID)
        LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID);

    In order to be able to use this view as if it were a regular table, I've made some triggers:

        CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView
        BEGIN
            DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID;
        END;

        CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView
        BEGIN
            INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr);
            INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr);
            INSERT INTO EmailInfo
            SELECT NEW.MessageID, A1.ID, A2.ID
            FROM AddressLookup A1, AddressLookup A2
            WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr;
        END;

        CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView
        BEGIN
            UPDATE EmailInfo SET MessageID = NEW.MessageID
            WHERE MessageID = OLD.MessageID;
            REPLACE INTO EmailView
            SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr;
        END;

    The problem: after

        INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]');
        INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]');

    the updated rows contain:

        MessageID  ToAddr             FromAddr
        ---------  -----------------  -----------------
        1          NULL               [email protected]
        2          [email protected]    [email protected]

    There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented:

        An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action
        within the body of the trigger. However if an ON CONFLICT clause is specified
        as part of the statement causing the trigger to fire, then conflict handling
        policy of the outer statement is used instead.

    So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?


  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it gathers a list of batches and recursively calls itself in order to iterate over batches. In (pseudo-)code, it looks something like this:

        CREATE PROCEDURE spProcedure
        AS
        BEGIN
            IF @code = 0
            BEGIN
                ...
                WHILE @@Fetch_Status = 0
                BEGIN
                    EXEC spProcedure @code
                    FETCH NEXT ... INTO @code
                END
            END
            ELSE
            BEGIN
                -- Disable indexes
                ...
                INSERT INTO table
                SELECT (...)
                -- Enable indexes
                ...

    Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me:

        SPID 75: transaction rollback in progress. Estimated rollback completion: 0%.
        Estimated time remaining: 554 seconds.

    I did find a forum message hinting that another spid should be killed before the first one can start a rollback. But that didn't work for me either, and I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should have the same spid, right?) In any case, my process is just sitting there, dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not wait hours while my server sits dead pretending to finish a supposed rollback. Is there some way I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?


  • Windows Azure – Write, Run or Use Software

    - by BuckWoody
    Windows Azure is a platform that has you covered, whether you need to write software, run software that is already written, or install and use "canned" software, whether you or someone else wrote it. Like any platform, it's a set of tools you can use where it makes sense to solve a problem. The primary location for Windows Azure information is http://windowsazure.com. You can find everything there, from the development kits for writing software to pricing, licensing, and tutorials on all of that. I have a few links here for learning to use Windows Azure – although it's best if you focus not on the tools, but on what you want to solve. I've got it broken down here into various sections, so you can quickly locate things you want to know. I'll include resources here from Microsoft and elsewhere – I use these same resources in the Architectural Design Sessions (ADS) I do with my clients worldwide.

    Write Software

    Also called "Platform as a Service" (PaaS), Windows Azure has lots of components you can use together or separately that allow you to write software in .NET or various open-source languages to work completely online, or in partnership with code you have on-premises, or both – even if you're using other cloud providers. Keep in mind that all of the features you see here can be used together or independently. For instance, you might only use a Web Site, or only Storage, but you can use both together. You can access all of these components through standard REST API calls, or using our Software Development Kit's APIs, which are a lot easier. In any case, you simply use Visual Studio, Eclipse, Cloud9 IDE, or even a text editor to write your code from a Mac, PC, or Linux. Components you can use:

    - Azure Web Sites: Windows Azure Web Sites allow you to quickly write and deploy websites, without setting up a virtual machine, installing a web server, or configuring complex settings. They work alone, with other Windows Azure Web Sites, or with other parts of Windows Azure.
    - Web and Worker Roles: Windows Azure Web Roles give you a full stateless computing instance with Internet Information Services (IIS) installed and configured. Windows Azure Worker Roles give you a full stateless computing instance without IIS installed, often used in a "services" mode. Scale-out is achieved either manually or programmatically under your control.
    - Storage: Windows Azure storage types include Blobs to store raw binary data, Tables for key/value pair data (like NoSQL data structures), Queues that allow interaction between stateless roles, and a relational SQL Server database.
    - Other Services: Windows Azure has many other services, such as a security mechanism, a cache (memcacheD compliant), a Service Bus, a Traffic Manager, and more. Once again, these features can be used with a Windows Azure project or alone, based on your needs.
    - Various Languages: Windows Azure supports the .NET stack of languages, as well as many open-source languages like Java, Python, PHP, Ruby, NodeJS, C++, and more.

    Use Software

    Also called "Software as a Service" (SaaS), this often means consumer- or business-level software like Hotmail or Office 365. In other words, you simply log on, use the software, and log off – there's nothing to install, and little to even configure. For the Information Technology professional, however, it's not quite the same. We want software that provides services, but in a platform. That means we want things like Hadoop or other software we don't want to have to install and configure. Components you can use:

    - Kits: Various software "kits" or packages are supported with just a few clicks, such as Umbraco, Wordpress, and others.
    - Windows Azure Media Services: a suite of services that allows you to upload media for encoding, processing, and even streaming – or any one or more of those functions. We can add DRM and even commercials to your media if you like. Windows Azure Media Services is used to stream everything from large events all the way down to small training videos.
    - High Performance Computing and "Big Data": Windows Azure allows you to scale to huge workloads, using a few clicks to deploy Hadoop clusters or High Performance Computing (HPC) nodes, accepting HPC jobs, Pig and Hive jobs, and even interfacing with Microsoft Excel.
    - Windows Azure Marketplace: offers data and programs you can quickly implement and use – some free, some for a fee.

    Run Software

    Also known as "Infrastructure as a Service" (IaaS), this offering allows you to build or simply choose a virtual machine to run server-based software. Components you can use:

    - Persistent Virtual Machines: You can choose to install Windows Server, Windows Server with Active Directory, with SQL Server, or even SharePoint from a pre-configured gallery. You can configure your own server images with standard Hyper-V technology and load them yourself – and even bring them back when you're done. As a new offering, we also allow you to select various distributions of Linux – a first for Microsoft.
    - Windows Azure Connect: You can connect your on-premises networks to Windows Azure instances.
    - Storage: Windows Azure storage can be used as a remote backup, a hybrid storage location, and more, using software or even hardware appliances.

    Decision Matrix

    With all of these options, you can use Windows Azure to solve just about any computing problem. It's often hard to know when to use something on-premises versus in the cloud, and what kind of service to use. I've used a decision matrix in the last couple of years to take a particular problem and choose the proper technology to solve it. It's all about options – there is no "silver bullet", whether that's Windows Azure or any other set of functions. I take the problem, decide which particular component I want to own and control, and choose the column that has that box darkened. For instance, if I have to control the wiring for a solution (a requirement in some military and government installations), that means the "Networking" component needs to be dark, and so I select the "On Premises" column for that particular solution. If I just need the solution provided and I want no control at all, I can look at "Software as a Service" solutions.

    Security, Pricing, and Other Info

    - Security: Security is one of the first questions you should ask in any distributed computing environment. We have certification info, coding guidelines, and more, even a general Request for Information (RFI) response already created for you.
    - Pricing: Are there licenses? How much does this cost? Is there a way to estimate the costs in this new environment?
    - New Features: Many new features have been added to Windows Azure – a good roundup of those changes can be found here.
    - Support: Software support on virtual machines, and general support.


  • How to build Lucene / Solr from source code in a Windows environment in order to add patches

    - by Simon
    I have successfully implemented Apache's Solr for free-text searching of a database-driven web site built for Windows platforms using Visual Studio in C#. I am trying to get a version of Solr working with field collapsing (which is not in the release version). There are patches available from Apache, and discussions on the web of people successfully applying them to the version I am using, but my problem is that I cannot get the build to work. I am a C# coder on Windows platforms, so Java development is new to me. I understand I need to get the correct source code (and revision) from SVN, add the appropriate patches, then build the war file to deploy to my system. I cannot seem to get the source to build and produce the deployment code, including jar (and subsequent war) files.

    My system is:

    - Windows 7 Ultimate for development
    - Visual Studio 2010 for C# / JavaScript development
    - MyEclipse 8.6 / Eclipse 3.5 for the Java build from source
    - Subclipse 1.6.x SVN plugin to get the source from Apache's SVN
    - Apache Solr 1.4.1

    So far I have found the right patches for the functionality I need: https://issues.apache.org/jira/browse/SOLR-236. Specifically I need to apply:

    - field_collapsing_1.1.0.patch: https://issues.apache.org/jira/secure/attachment/12357681/field_collapsing_1.1.0.patch
    - SOLR-236-1_4_1.patch: https://issues.apache.org/jira/secure/attachment/12448216/SOLR-236-1_4_1.patch

    I downloaded the Lucene trunk version from the day before the patch was released (revision 958303 from 28/6/10) via Subclipse into a Java package in MyEclipse, from https://svn.apache.org/repos/asf/lucene/dev/trunk (Solr is the web implementation of Lucene and is in the subfolder solr/). I can apply patches to the solr directory once it has downloaded, but the parent Lucene project doesn't build the war files or copy the jar and other files into the bin folder (it stays empty). The build process starts but doesn't do anything apart from creating the folders bin and src. I am building the whole Lucene project, which contains Solr. I have tried building the source without patching, and the same happens. If I copy out the Solr directory into a new project, it runs the build and copies all the related files, tests, etc., but fails with 4,500 errors and does not produce the jar files or war file, which I assume is because it can't find the Lucene trunk files it depends on.

    I have two interrelated problems:

    1) I can't get the downloaded Lucene trunk to build
    2) The jar, war, and associated files are not created

    Can anyone help with what I am missing to build the war file? I have spent two days getting this far, as the help online is extremely patchy and I can't find a walkthrough tutorial on building a Java war file from source in a Windows environment. Any help will be much appreciated. Simon


  • Ajax problem: data not displaying when using multiple JavaScript calls

    - by Ronedog
    I'm writing an app that uses Ajax to retrieve data from a MySQL db using PHP. Because of the nature of the app, the user clicks an href link that has an "onclick" event used to call the JavaScript/Ajax. I'm retrieving the data from MySQL, then calling a separate PHP function which creates a small HTML table with the necessary data in it. The new table gets passed back to the responseText and is displayed inside a div tag. The tables only have around 10-20 rows of data in them. This functionality works fine and displays the data in HTML form exactly as it needs to be on the page.

    The problem is this: the href "onclick" event needs to run multiple scripts, one right after the other. The first script updates the "existing" data, and inside the "update_existing" function is a call to refresh a section of the page with the updated HTML from the responseText. Then, when that is done, a "display_html" function is called, which also updates a different section of the page with its newly created HTML table. The event looks like this: Update. (This string gets built dynamically using PHP with parameters supplied, but for this example I took the parameters out so it didn't get confusing.) The update_existing() function actually calls the display_html() function, which updates a section of the page as needed. I need to update a different section of the page on the same click of the mouse, right after the update, which is why I'm calling display_html() again right after it.

    The problem is that only the last call is being updated on my screen. In other words, the second function call, display_html(), executes and displays the refreshed data just fine, but the previous call to update_existing() runs and updates the database properly yet doesn't display on the screen unless I press the browser's "refresh" button, which of course displays the new data exactly how I want it to; but I don't want the users to have to press the "refresh" button. I tried adding multiple display_html() calls one right after the other, separating all of them with semicolons, and learned that only the very last function call actually refreshed the div element on the HTML page with the table information; all the previous display_html() calls worked, but they couldn't be seen on the page without a refresh of the browser.

    Is this a problem with JavaScript, or the Ajax call, or is this a limitation in the DOM that only allows one element to be updated at a time? The Ajax call is asynchronous; I've tried both, but only async works, period. This is the same in both Firefox and Internet Explorer. Any ideas what's going on and how to get around it so I can run these multiple scripts?


  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now.  Yes, I am not ashamed to admit it, but I'm finally doing something about it. I met her in college, she was new and sexy and amazingly fast -- and I'd never met anything like her before.  Her style and her power captivated me and I couldn't wait to learn more about her.  I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on. Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance.  She promised me she'd be gentle and not hurt me the way C did.  She promised me she'd clean up after herself better than C did.  She promised me she'd be less enigmatic and easier to keep happy than C was.  But I was deceived.  Oh sure, as far as truth goes, it wasn't a complete lie.  To some extent she was more fun, more powerful, safer, and easier to maintain.  But it just wasn't good enough -- or at least it's not good enough now. I loved C++, some part of me still does, it's my first love of programming languages and I recognize its raw power, its blazing speed, and its improvements over its predecessor.  But with today's hardware, at speeds we could only dream to conceive of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund. If I ever need to write an operating system or a device driver, then I might need that speed.  But 99% of the time I don't.  I'm a business-type programmer and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for it.   That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical.  There are many times we write programs to handle millions of real-time updates or handle thousands of financial transactions or track trading algorithms where every second counts.  But if I choose to write my code in C++ purely for speed, chances are I'll never notice the speed increase -- and equally true, chances are it will be far more prone to crash and far less easy to maintain.  Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations.  If I choose to write a O(n2) algorithm when I could have used a O(n) algorithm -- that can kill me.  If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me.  And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes, that can also kill me.  But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and more likely than not will open up the system to more risk of crashes and resource leaks. And that's when my love for C++ began to die.  When I noticed that I didn't need that speed anymore.  That that speed was really kind of a lie.  Sure, I can be super efficient and pack bits in a byte instead of using separate boolean values.  Sure, I can use an unsigned char instead of an int.  But in the grand scheme of things it doesn't matter as much as you think it does.  The key is maintainability, and that's where C++ failed me.
    I like to tell the other developers I work with that there are two levels of correctness in coding:

    1. Is it immediately correct?
    2. Will it stay correct?

    That is, you can hack together any piece of code and make it correct to satisfy a task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct. Some people laugh at me when I say I now prefer maintainability over speed.  But that is exactly the point.  If you focus solely on speed, you tend to produce code that is much harder to maintain over the long haul, and that's a load of technical debt most shops can't afford to carry; they end up completely scrapping code before its time.  When good code is written for maintainability, though, it can be correct both now and in the future. And you know the best part?  My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right.  Her creators have poured hundreds of thousands of hours into making her the sexy beast she is today.  They made her easy to understand and not an enigmatic mess.  They made her consistent and not moody and amorphous.  And they made her perform as fast as I care to go by optimizing her both at compile time and at run time. Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back.  She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need.  C# doesn't ask me to clean up her messes!  She cleans up after herself, and she tries to make my life easier by taking on most of those optimization tasks C++ asked me to take upon myself.  Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her.  That I alone caused the pain she inflicted on me.  And to some extent, you have a point.  But she was so high maintenance, requiring me to know every twist and turn of her vast and unrestrained power, that any wrong term or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake.  But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe. So that's why I'm leaving C++ behind.  She says she's changing for me, but I have no interest in what C++0x may bring.  Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see.  But she's out of my house now.  I have three kids and a dog and a cat, and all require me to clean up after them; why should I have to clean up after my programming language as well?


  • LINQ to SQL Could not find key member. Only fails on server.

    - by Adam Carr
    I have a scenario where I am inheriting from an abstract class in my partial LINQ to SQL auto-generated class implementation. My base abstract class has an abstract property called ID, which I have flagged inside my LINQ to SQL model with the override instance modifier. This works fine locally without any issues. I have also done some development on another machine and it works fine there too (both in VS2008 and using Subversion). I am running CI with TeamCity, and the build succeeds and deploys as desired. The problem is that when the server tries to hit the database for the first time via the LINQ to SQL data context, it generates the following error:

        Could not find key member 'Id' of key 'Id' on type 'CustomType'. The key may
        be wrong or the field or property on 'CustomType' has changed names.

    I have tried changing my configuration by not implementing the Id field in my base class, but this still fails. Why does it work on both of my dev machines but not on the server? I am using LINQ to SQL in another project that runs on this server just fine. FYI: LINQ to SQL, SQL 2008, .NET 3.5, Server 2008, IIS 7.0.

    UPDATE: I have gone back and added the same table a second time in the same data model, but without a base class, and have then displayed the results from that table and got no errors. This tells me it has something to do with my base abstract class and the need to flag a property on one of my LINQ to SQL model classes (one that belongs to a key relationship) with the override instance modifier. No answer to this yet, but I am getting closer.

    UPDATE: I have fixed my issue by simply changing my approach to the problem, but I am still interested in why this doesn't work. I created a new WinSrv2008 VPC, patched it, deployed a pre-built version of my site to it, and still got the same error. I now assume the issue is, like what the person said here, a dependency issue with VS2008. I will install VS2008 on the VPC to see if it works after that.
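
    For readers trying to reproduce the setup being described, a minimal sketch of the pattern (an abstract base key property overridden by a LINQ to SQL mapped property); every name here is invented to show the shape, not taken from the poster's project:

        using System.Data.Linq.Mapping;

        public abstract class EntityBase
        {
            // Abstract key that every mapped entity must supply.
            public abstract int Id { get; set; }
        }

        [Table(Name = "dbo.CustomTypes")]
        public class CustomType : EntityBase
        {
            // 'override' satisfies the base class while the Column attribute
            // marks the property as the primary key for LINQ to SQL.
            [Column(Name = "Id", IsPrimaryKey = true)]
            public override int Id { get; set; }
        }

    The error text suggests LINQ to SQL's reflection over the type could not line the declared key up with a mapped member, which is consistent with the poster's finding that removing the base class makes the error disappear.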


  • Evaluating points in time by months, but without referencing years in Rails

    - by MikeH
    FYI, there is some overlap between the initial description of this question and a question I asked yesterday, but the question is different.

    My app has users who have seasonal products. When a user selects a product, we allow him to also select the product's season. We accomplish this by letting him select a start date and an end date for each product. We're using date_select to generate two sets of drop-downs: one for the start date and one for the end date. Including years doesn't make sense for our model, so we're using the option :discard_year => true. When you use :discard_year => true, Rails still sets a year in the database; it just doesn't appear in the views. Rails sets all the years to either 0001 or 0002 in our app. Yes, we could make it 2009 and 2010, or any other pair. But the point is that we want the months and days to function independently of a particular year. If we used 2009 and 2010, those dates would be wrong next year, because we don't expect these records to be updated every year.

    My problem is that we need to dynamically evaluate the availability of products based on their relationship to the current month. For example, assume it's March 15. Regardless of the year, I need a method that can tell me that a product available from October to January is not available right now. If we were using actual years, this would be pretty easy. For example, in the products model, I could do this:

        def is_available?
          (season_start.past? && season_end.future?)
        end

    I could also evaluate a start_date and an end_date against the current date. However, in the setup I've described above, where we have arbitrary years that only make sense relative to each other, these methods don't work. For example, is_available? would return false for all my products, because their end date is in the year 0001 or 0002. What I need is a method just like the ones I used as examples above, except that it evaluates against the current month instead of the current date, and past and future months instead of years. I have no idea how to do this or whether Rails has any built-in functionality that could help. I've gone through all the date and time methods/helpers in the API docs, but I'm not seeing anything equivalent to what I'm describing. Thanks.

    Read the article

  • Weblogic 10.0: SAMLSignedObject.verify() failed to validate signature value

    - by joshea
    I've been having this problem for a while and it's driving me nuts. I'm trying to create a client (in C# .NET 2.0) that will use SAML 1.1 to sign on to a WebLogic 10.0 server (i.e., a Single Sign-On scenario, using the browser/POST profile). The client is on a WinXP machine and the WebLogic server is on a RHEL 5 box. I based my client largely on code in the example here: http://www.codeproject.com/KB/aspnet/DotNetSamlPost.aspx (the source has a section for SAML 1.1). I set up WebLogic based on the instructions for a SAML Destination Site from here: http://www.oracle.com/technology/pub/articles/dev2arch/2006/12/sso-with-saml4.html

    I created a certificate using the makecert that came with VS 2005:

    makecert -r -pe -n "CN=whatever" -b 01/01/2010 -e 01/01/2011 -sky exchange whatever.cer -sv whatever.pvk
    pvk2pfx.exe -pvk whatever.pvk -spc whatever.cer -pfx whatever.pfx

    Then I installed the .pfx into my personal certificate store and installed the .cer into the WebLogic SAML Identity Asserter V2. I read on another site that adding whitespace to the response after signing (i.e., formatting it to be readable) would cause this problem, so I tried various combinations of turning .Indent on and off in the XmlWriterSettings and turning .PreserveWhitespace on and off when loading the XML document, and none of it made any difference. I've printed the SignatureValue both before the message is encoded/sent and after it arrives/gets decoded, and they are the same. So, to be clear: the Response appears to be formed, encoded, sent, and decoded fine (I see the full Response in the WebLogic logs). WebLogic finds the certificate I want it to use, verifies that a key was supplied, gets the signed info, and then fails to validate the signature. Code:

    public string createResponse(Dictionary<string, string> attributes)
    {
        // Create Response
        ResponseType response = new ResponseType();
        response.ResponseID = "_" + Guid.NewGuid().ToString();
        response.MajorVersion = "1";
        response.MinorVersion = "1";
        response.IssueInstant = System.DateTime.UtcNow;
        response.Recipient = "http://theWLServer/samlacs/acs";

        StatusType status = new StatusType();
        status.StatusCode = new StatusCodeType();
        status.StatusCode.Value = new XmlQualifiedName("Success", "urn:oasis:names:tc:SAML:1.0:protocol");
        response.Status = status;

        // Create Assertion
        AssertionType assertionType = CreateSaml11Assertion(attributes);
        response.Assertion = new AssertionType[] { assertionType };

        // Serialize
        XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
        ns.Add("samlp", "urn:oasis:names:tc:SAML:1.0:protocol");
        ns.Add("saml", "urn:oasis:names:tc:SAML:1.0:assertion");
        XmlSerializer responseSerializer = new XmlSerializer(response.GetType());
        StringWriter stringWriter = new StringWriter();
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.OmitXmlDeclaration = true;
        settings.Indent = false; // I've tried both ways, for the fun of it
        settings.Encoding = Encoding.UTF8;
        XmlWriter responseWriter = XmlTextWriter.Create(stringWriter, settings);
        responseSerializer.Serialize(responseWriter, response, ns);
        responseWriter.Close();
        string samlString = stringWriter.ToString();
        stringWriter.Close();

        // Sign the document
        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true; // also tried this both ways to no avail
        doc.LoadXml(samlString);

        X509Certificate2 cert = null;
        X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2Collection coll = store.Certificates.Find(
            X509FindType.FindBySubjectDistinguishedName, "distName", true);
        if (coll.Count < 1)
        {
            throw new ArgumentException("Unable to locate certificate");
        }
        cert = coll[0];
        store.Close();

        // This special SignDoc just overrides a function in SignedXml so
        // it knows to look for ResponseID rather than ID.
        XmlElement signature = SamlHelper.SignDoc(doc, cert, "ResponseID", response.ResponseID);
        doc.DocumentElement.InsertBefore(signature, doc.DocumentElement.ChildNodes[0]);

        // Base64 encode (URL encoding happens when the form data is built)
        byte[] base64EncodedBytes = Encoding.UTF8.GetBytes(doc.OuterXml);
        string returnValue = System.Convert.ToBase64String(base64EncodedBytes);
        return returnValue;
    }

    private AssertionType CreateSaml11Assertion(Dictionary<string, string> attributes)
    {
        AssertionType assertion = new AssertionType();
        assertion.AssertionID = "_" + Guid.NewGuid().ToString();
        assertion.Issuer = "madeUpValue";
        assertion.MajorVersion = "1";
        assertion.MinorVersion = "1";
        assertion.IssueInstant = System.DateTime.UtcNow;

        // Not-before / not-on-or-after conditions
        ConditionsType conditions = new ConditionsType();
        conditions.NotBefore = DateTime.UtcNow;
        conditions.NotBeforeSpecified = true;
        conditions.NotOnOrAfter = DateTime.UtcNow.AddMinutes(10);
        conditions.NotOnOrAfterSpecified = true;

        // Name identifier to be used in the SAML subject
        NameIdentifierType nameIdentifier = new NameIdentifierType();
        nameIdentifier.NameQualifier = domain.Trim();
        nameIdentifier.Value = subject.Trim();

        SubjectConfirmationType subjectConfirmation = new SubjectConfirmationType();
        subjectConfirmation.ConfirmationMethod = new string[] { "urn:oasis:names:tc:SAML:1.0:cm:bearer" };

        // Create the SAML subject.
        SubjectType samlSubject = new SubjectType();
        AttributeStatementType attrStatement = new AttributeStatementType();
        AuthenticationStatementType authStatement = new AuthenticationStatementType();
        authStatement.AuthenticationMethod = "urn:oasis:names:tc:SAML:1.0:am:password";
        authStatement.AuthenticationInstant = System.DateTime.UtcNow;
        samlSubject.Items = new object[] { nameIdentifier, subjectConfirmation };
        attrStatement.Subject = samlSubject;
        authStatement.Subject = samlSubject;

        IPHostEntry ipEntry = Dns.GetHostEntry(System.Environment.MachineName);
        SubjectLocalityType subjectLocality = new SubjectLocalityType();
        subjectLocality.IPAddress = ipEntry.AddressList[0].ToString();
        authStatement.SubjectLocality = subjectLocality;

        // Create SAML attributes.
        attrStatement.Attribute = new AttributeType[attributes.Count];
        int i = 0;
        foreach (KeyValuePair<string, string> attribute in attributes)
        {
            AttributeType attr = new AttributeType();
            attr.AttributeName = attribute.Key;
            attr.AttributeNamespace = domain;
            attr.AttributeValue = new object[] { attribute.Value };
            attrStatement.Attribute[i] = attr;
            i++;
        }

        assertion.Conditions = conditions;
        assertion.Items = new StatementAbstractType[] { authStatement, attrStatement };
        return assertion;
    }

    private static XmlElement SignDoc(XmlDocument doc, X509Certificate2 cert2, string referenceId, string referenceValue)
    {
        // Use our own implementation of SignedXml
        SamlSignedXml sig = new SamlSignedXml(doc, referenceId);

        // Add the key to the SignedXml document.
        sig.SigningKey = cert2.PrivateKey;

        // Create a reference to be signed.
        Reference reference = new Reference();
        reference.Uri = "#" + referenceValue;

        // Add an enveloped transformation to the reference.
        XmlDsigEnvelopedSignatureTransform env = new XmlDsigEnvelopedSignatureTransform();
        reference.AddTransform(env);

        // Add the reference to the SignedXml object.
        sig.AddReference(reference);

        // Add KeyInfo (optional; helps the recipient find the key to validate).
        KeyInfo keyInfo = new KeyInfo();
        keyInfo.AddClause(new KeyInfoX509Data(cert2));
        sig.KeyInfo = keyInfo;

        // Compute the signature.
        sig.ComputeSignature();

        // Get the XML representation of the signature and return it.
        XmlElement xmlDigitalSignature = sig.GetXml();
        return xmlDigitalSignature;
    }

    To open the page in my client app:

    string postData = String.Format(
        "SAMLResponse={0}&APID=ap_00001&TARGET={1}",
        System.Web.HttpUtility.UrlEncode(builder.buildResponse("http://theWLServer/samlacs/acs", attributes)),
        "http://desiredURL");

    webBrowser.Navigate(
        "http://theWLServer/samlacs/acs",
        "_self",
        Encoding.UTF8.GetBytes(postData),
        "Content-Type: application/x-www-form-urlencoded");
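    One debugging step that may help isolate the failure (a sketch only; SamlSignedXml is the poster's own SignedXml subclass, so the exact verification hook may differ): verify the signature locally with the same certificate before posting. If this local check fails too, the problem is in the signing or reference setup rather than on the WebLogic side.

    // Hypothetical local round-trip check, placed just after signing:
    XmlDocument check = new XmlDocument();
    check.PreserveWhitespace = true;
    check.LoadXml(doc.OuterXml);

    SamlSignedXml verifier = new SamlSignedXml(check, "ResponseID");
    XmlElement sigElem = (XmlElement)check.GetElementsByTagName(
        "Signature", "http://www.w3.org/2000/09/xmldsig#")[0];
    verifier.LoadXml(sigElem);

    // Validate against the public key of the signing certificate.
    bool valid = verifier.CheckSignature(cert.PublicKey.Key);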

    Read the article

  • Problem with PHP 'PHP_SELF'

    - by shinjuo
    I am having a bit of trouble. It does not seem like a big deal, but I have created a page that lets the user edit a MySQL database. Once they click submit, it should process the PHP inside the if statement and echo "1 record updated". The problem is that it does not wait for the submit; it just seems to ignore the way I wrote my if and displays the whole page. Can anyone see where I went wrong?

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title></title>
    </head>
    <body>
    <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
    <?php require("serverInfo.php"); ?>
    <?php
    $res = mysql_query("SELECT * FROM cardLists order by cardID") or die(mysql_error());
    echo "<select name = 'Cards'>";
    while($row=mysql_fetch_assoc($res)) {
        echo "<option value=\"$row[cardID]\">$row[cardID]</option>";
    }
    echo "</select>";
    ?>
    Amount to Add: <input type="text" name="Add" />
    <input type="submit" />
    </form>
    <?php
    if(isset($_POST['submit']));
    {
        require("serverInfo.php");
        mysql_query("UPDATE `cardLists` SET `AmountLeft` = `AmountLeft` + ".mysql_real_escape_string($_POST['Add'])." WHERE `cardID` = '".mysql_real_escape_string($_POST['Cards'])."'");
        mysql_close($link);
        echo "1 record Updated";
    }
    ?>
    <br />
    <a href="index.php"> <input type="submit" name="main" id="main" value="Return To Main" /></a>
    </body>
    </html>
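    Two things in the snippet above explain the behavior: the semicolon after if(isset($_POST['submit'])) terminates the statement, so the block below it always runs regardless of the condition; and the submit button has no name attribute, so $_POST['submit'] is never set in the first place. A corrected sketch of those two spots:

    <input type="submit" name="submit" value="Submit" />

    <?php
    if (isset($_POST['submit'])) {  // no trailing semicolon here
        require("serverInfo.php");
        mysql_query("UPDATE `cardLists` SET `AmountLeft` = `AmountLeft` + " . mysql_real_escape_string($_POST['Add']) . " WHERE `cardID` = '" . mysql_real_escape_string($_POST['Cards']) . "'");
        mysql_close($link);
        echo "1 record updated";
    }
    ?>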

    Read the article

  • Parsing XML with VB.NET

    - by SuperFurryToad
    I'm trying to make sense of a big data dump of XML that I need to write to a database using some VB.net code. I'm looking for some help getting started with the parsing code, specifically how to access the attribute values.

    <Product ID="523233" UserTypeID="Property" ParentID="523232">
      <Name>My Property Name</Name>
      <AssetCrossReference AssetID="173501" Type=" Non Print old"></AssetCrossReference>
      <AssetCrossReference AssetID="554740" Type=" Non Print old"></AssetCrossReference>
      <AssetCrossReference AssetID="566495" Type=" Non Print old"></AssetCrossReference>
      <AssetCrossReference AssetID="553014" Type="Non Print"></AssetCrossReference>
      <AssetCrossReference AssetID="553015" Type="Non Print"></AssetCrossReference>
      <AssetCrossReference AssetID="553016" Type="Non Print"></AssetCrossReference>
      <AssetCrossReference AssetID="553017" Type="Non Print"></AssetCrossReference>
      <AssetCrossReference AssetID="553018" Type="Non Print"></AssetCrossReference>
      <Values>
        <Value AttributeID="5115">Section of main pool</Value>
        <Value AttributeID="5137">114 apartments, four floors, no lifts</Value>
        <Value AttributeID="5170">Property location</Value>
        <Value AttributeID="5164">2 key</Value>
        <Value AttributeID="5134">A comfortable property, the apartment is set on a pine-covered hillside - a scenic and peaceful location.</Value>
        <Value AttributeID="5200">YYY</Value>
        <Value AttributeID="5148">facilities include X,Y,Z</Value>
        <Value AttributeID="5067">Self Catering. </Value>
        <Value AttributeID="5221">Frequent organised daytime activities</Value>
      </Values>
    </Product>

    Thanks in advance for your help.
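    As a starting point, LINQ to XML reads both elements and attributes with little ceremony. A minimal VB.NET sketch (the file path is hypothetical; element and attribute names are taken from the sample above, and missing attributes would need null checks in real code):

    Imports System.Xml.Linq

    Module ParseSample
        Sub Main()
            ' Load the dump (path is made up for illustration).
            Dim doc As XDocument = XDocument.Load("C:\data\products.xml")

            For Each product As XElement In doc.Descendants("Product")
                ' Attribute values are read with .Attribute(name).Value.
                Dim id As String = product.Attribute("ID").Value
                Dim name As String = product.Element("Name").Value

                For Each v As XElement In product.Descendants("Value")
                    Dim attrId As String = v.Attribute("AttributeID").Value
                    Console.WriteLine("{0} ({1}) - attribute {2}: {3}", id, name, attrId, v.Value)
                Next
            Next
        End Sub
    End Module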

    Read the article

  • Castle ActiveRecord - schema generation without enforcing referential integrity?

    - by Simon
    Hi all, I've just started playing with Castle ActiveRecord, as it seems like a gentle way into NHibernate. I really like the idea of the database schema being generated from my classes during development. I want to do something similar to the following:

    [ActiveRecord]
    public class Camera : ActiveRecordBase<Camera>
    {
        [PrimaryKey]
        public int CameraId { get; set; }

        [Property]
        public int CamKitId { get; set; }

        [Property]
        public string serialNo { get; set; }
    }

    [ActiveRecord]
    public class Tripod : ActiveRecordBase<Tripod>
    {
        [PrimaryKey]
        public int TripodId { get; set; }

        [Property]
        public int CamKitId { get; set; }

        [Property]
        public string serialNo { get; set; }
    }

    [ActiveRecord]
    public class CameraKit : ActiveRecordBase<CameraKit>
    {
        [PrimaryKey]
        public int CamKitId { get; set; }

        [Property]
        public string description { get; set; }

        [HasMany(Inverse=true, Table="Cameras", ColumnKey="CamKitId")]
        public IList<Camera> Cameras { get; set; }

        [HasMany(Inverse=true, Table="Tripods", ColumnKey="CamKitId")]
        public IList<Tripod> Tripods { get; set; }
    }

    A camera kit should contain any number of tripods and cameras. Camera kits exist independently of cameras and tripods, but are sometimes related. The problem is that if I use CreateSchema, it will put foreign key constraints on the Camera and Tripod tables. I don't want this; I want to be able to set CamKitId to null on the Camera and Tripod tables to indicate that the item is not part of a CameraKit. Is there a way to tell ActiveRecord/NHibernate to still see them as related, without enforcing the integrity? I was thinking I could have a CameraKit record in there to indicate "no camera kit", but it seems like overkill. Or is my schema wrong? Am I doing something I shouldn't with an ORM? (I've not really used ORMs much.) Thanks!
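    If the underlying goal is simply that CamKitId may be NULL (meaning "belongs to no kit"), one option is to map the child side as a nullable BelongsTo association; the generated column then allows NULL even though the foreign key constraint still exists. A sketch, untested against the schema above:

    [ActiveRecord]
    public class Camera : ActiveRecordBase<Camera>
    {
        [PrimaryKey]
        public int CameraId { get; set; }

        // NotNull defaults to false, so the generated CamKitId column is
        // nullable: a NULL value means "not part of any kit".
        [BelongsTo("CamKitId")]
        public CameraKit Kit { get; set; }

        [Property]
        public string serialNo { get; set; }
    }

    This trades the raw int CamKitId for a typed association, which is usually what NHibernate expects anyway when the parent declares a HasMany over the same column.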

    Read the article

  • Bassistance Autocomplete Plugin - Search Page Replacement

    - by Dante
    Hi, I've set up an autocomplete field that searches my database, which works fine:

    $("#autocomplete input#lookupAccount").autocomplete("lib/php/autocomplete_backend.php", {
        width: 300,
        selectFirst: false,
        delay: 250
    });

    When a user clicks on a result I want to send them to another page, depending on what they clicked. In the documentation I find the following:

    Search Page Replacement: An autocomplete plugin can be used to search for a term and redirect to a page associated with a resulting item. The following is one way to achieve the redirect:

    var data = [
        { text: 'Link A', url: '/page1' },
        { text: 'Link B', url: '/page2' }
    ];
    $("...").autocomplete(data, {
        formatItem: function(item) {
            return item.text;
        }
    }).result(function(event, item) {
        location.href = item.url;
    });

    So I need to return the following from my PHP file: {text:'link A', url:'/page1'},... But my PHP file currently returns:

    $returnData = "<ul>";
    if(isset($results)){
        for($j=0; $j < count($results); $j++){
            if($results[$j][0] == "account"){
                if($j % 2){
                    $returnData .= "<li>".$results[$j][1]."<br /><i>".$results[$j][2].", ".$results[$j][3]." ".$results[$j][4]."</i></li>";
                } else {
                    $returnData .= "<li style=\"background: blue\">".$results[$j][1]."<br /><i>".$results[$j][2].", ".$results[$j][3]." ".$results[$j][4]."</i></li>";
                }
            } else {
                $returnData .= "<li style=\"background: yellow\"><i>".$results[$j][1]."</i> (".$results[$j][2].")</li>";
            }
        }
        $returnData .= "</ul>";
        echo $returnData;
    } else {
        echo "Sorry geen resultaten!";
    }

    So it loops through an array and returns an li element depending on what it finds in the array. How can I match that with {text:'link A', url:'/page1'}?
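    A note on the remote case: as far as I recall, this version of the plugin expects one match per line from the backend and splits extra fields on a pipe character, handing the parsed row to the .result() callback; the object-with-url example in the docs applies to local data only. A sketch under that assumption (the URL scheme and field order are made up):

    // autocomplete_backend.php: emit "label|url", one match per line.
    foreach ($results as $row) {
        echo $row[1] . "|" . "/accounts/" . $row[0] . "\n";
    }

    // Client side: show the first field, redirect using the second.
    $("#autocomplete input#lookupAccount")
        .autocomplete("lib/php/autocomplete_backend.php", {
            width: 300,
            selectFirst: false,
            delay: 250,
            formatItem: function (row) { return row[0]; }
        })
        .result(function (event, row) {
            location.href = row[1]; // second pipe-delimited field
        });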

    Read the article

  • All embedded databases fail to open connections

    - by rsteckly
    Hi, I'm working on a WinForms desktop application that needs to store data. I made the really bad decision to try to embed a database. I've tried:

    SQLite
    VistaDB
    SQL Server Compact

    In each case, I was able to generate an Entity Framework model over the basic schema I've created. I have an event that adds data, which I've been using to test these databases. Well, I kept adding a new record using EF and finding it didn't actually insert a record. In debugging, I checked the context object to see what was happening. It turns out that it is saying "the underlying provider failed to open," or something to that effect. It was not throwing an exception, just not inserting a record. The same thing has happened for all three embedded databases, prompting me to get it through my dense head that there has to be something wrong with my configuration.

    Well, I tried to write some basic SQL using a SqlConnection and SqlCommand. This time it throws an exception. In the SQL Server Compact case, it now says: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)"

    I thought perhaps the problem was the path in app.config, so I changed the connection string. Note that I simplified the path away from anything that might have spaces and avoided using the Data Directory nonsense that causes problems when the debugging directory does not match the preconfigured value for the data directory. I'm running Windows 7; I thought perhaps it might be an access issue, so I tried running VS 2010 in Administrator mode. No luck. I also installed SQL Server Compact SP2, thinking this might be a bug. No luck. Anyway, I'm ready to pull my hair out. I'm on a tight deadline for this thing and didn't expect to spend the day trying to figure out what is going on.
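    Incidentally, that error 26 message comes from the full SQL Server network client, which suggests the hand-written test used System.Data.SqlClient against the Compact file. SQL Server Compact has its own ADO.NET provider; a minimal sketch of a connectivity test with it (the path and table name are made up, and the project needs a reference to System.Data.SqlServerCe):

    using System.Data.SqlServerCe;

    // Compact is file-based: no server, no instance name in the string.
    string connStr = @"Data Source=C:\Data\MyApp.sdf";

    using (SqlCeConnection conn = new SqlCeConnection(connStr))
    using (SqlCeCommand cmd = new SqlCeCommand("SELECT COUNT(*) FROM SomeTable", conn))
    {
        conn.Open();  // throws immediately if the file or provider is wrong
        int rows = (int)cmd.ExecuteScalar();
    }

    The same idea applies to the EF connection string: the provider name in app.config must match the embedded engine (System.Data.SqlServerCe.3.5 or 4.0 for Compact), not System.Data.SqlClient.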

    Read the article

  • How to get hibernate3-maven-plugin hbm2ddl to find JDBC driver?

    - by HDave
    I have a Java project I am building with Maven. I am now trying to get the hibernate3-maven-plugin to run the hbm2ddl tool to generate a schema.sql file I can use to create the database schema from my annotated domain classes. This is a JPA application that uses Hibernate as the provider. In my persistence.xml file I call out the MySQL driver:

    <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
    <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>

    When I run Maven, I see it processing all my classes, but when it goes to output the schema, I get the following error:

    ERROR org.hibernate.connection.DriverManagerConnectionProvider - JDBC Driver class not found: com.mysql.jdbc.Driver
    java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

    I have the MySQL driver as a dependency of this module. However, it seems the hbm2ddl tool cannot find it. I would have guessed that the Maven plugin would have known to search the local Maven repository for this driver. What gives? The relevant part of my pom.xml is this:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>hibernate3-maven-plugin</artifactId>
      <executions>
        <execution>
          <phase>process-classes</phase>
          <goals>
            <goal>hbm2ddl</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <components>
          <component>
            <name>hbm2ddl</name>
            <implementation>jpaconfiguration</implementation>
          </component>
        </components>
        <componentProperties>
          <persistenceunit>my-unit</persistenceunit>
        </componentProperties>
      </configuration>
    </plugin>
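    For what it's worth, Maven plugins resolve their own classpath independently of the project's dependencies, so the usual remedy is to declare the JDBC driver as a dependency of the plugin itself. A sketch (the version number is illustrative):

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>hibernate3-maven-plugin</artifactId>
      <!-- executions and configuration as above -->
      <dependencies>
        <dependency>
          <groupId>mysql</groupId>
          <artifactId>mysql-connector-java</artifactId>
          <version>5.1.6</version>
        </dependency>
      </dependencies>
    </plugin>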

    Read the article

  • CodePlex Daily Summary for Monday, June 02, 2014

    CodePlex Daily Summary for Monday, June 02, 2014

    Popular Releases

    - Portable Class Library for SQLite (Portable Class Library for SQLite - 3.8.4.4): This pull request from mattleibow addresses an issue with custom function creation (define functions in C# code and invoke them from SQLite as if they were regular SQL functions). Impact: Xamarin iOS.
    - Tweetinvi a friendly Twitter C# API (Tweetinvi 0.9.3.x): Timelines: added all the parameters available from the Timeline endpoints in Tweetinvi; available for HomeTimeline, UserTimeline and MentionsTimeline. Simple query: var tweets = Timeline.GetHomeTimeline(); or, for queries with specific parameters: var timelineParameter = Timeline.GenerateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweets: Add mis...
    - Sandcastle Help File Builder (Help File Builder and Tools v2014.5.31.0): General information. IMPORTANT: on some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...
    - Tooltip Web Preview (ToolTip Web Preview): Version 1.0.
    - Database Helper (Release 1.0.0.0): First release of Database Helper.
    - CoMaSy (CoMaSy 1.0.2): Contact Management System.
    - Image View Slider (Image View Slider): This is a .NET component created with VB.NET. It adds an image viewer with several properties to your application form. We wish somebody to improve it freely. Try it out! Authors: Steven Renaldo Antony, Yustinus Arjuna Purnama Putra, Andre Wijaya P, Martin Lidau (PBK GENAP 2014 - TI UKDW).
    - Aspose for Apache POI (Missing Features of Apache POI WP - v 1.1): The release contains the missing features in the Apache POI WP SDK in comparison with Aspose.Words for dealing with Microsoft Word. What's new: examples for Insert Picture in Word Document, Insert Comments, Set Page Borders, Mail Merge from XML Data Source, and Moving the Cursor. Many more examples are yet to come; raise your queries and suggest more examples via Aspose Forums or via this social coding site.
    - SEToolbox (01.032.014 Release 1): Added a fix for loading game textures for icons causing "Unable to read beyond the end of the stream". Added a new Resource Report that displays all in-game resources in a concise report. Added a temp directory cleaner to keep excess files from building up. Fixed use of colors on the windows to work better with desktop schemes. Added base support for multilingual resources, which will allow loading of the Space Engineers resources to show localized names, and display localized date a...
    - ClosedXML - The easy way to OpenXML (ClosedXML 0.71.2): More memory and performance improvements. Fixed an issue with pivot table field order.
    - Vi-AIO SearchBar (Vi – AIO Search Bar): Version 1.0.
    - Top Verses ( Ayat Emas ) (Binary Top Verses): This is the bin folder of the component; the .dll component is inside.
    - Traditional Calendar Component (Traditional Calender Converter): Duta Wacana Christian University. This file contains the Traditional Calendar Component and a demo application that uses it. The component was made with .NET Framework 4; the programming language is C#.
    - SQLSetupHelper (1.0.0.0): First stable version of SQL Setup.
    - Composite Iconote (Composite Iconote): A composite made with Microsoft Visual Studio 2013. Requirement: to develop this composite or use this component in your application, your computer must have .NET Framework 4.5 or newer.
    - Magick.NET (Magick.NET 6.8.9.101): Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: int/short Set methods of WritablePixelCollection are now unsigned; the Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.
    - fnr.exe - Find And Replace Tool (1.7): Bug fixes. Refactored the logic for encoding text values to the command line to handle common edge cases where a find/replace operation works in the GUI but not on the command line. Fix for a bug where the selection in the Encoding drop-down differed when generating the command line in some cases, as reported in https://findandreplace.codeplex.com/workitem/34. Fix for "Backslash inserted before dot in replacement text" reported at https://findandreplace.codeplex.com/discussions/541024. Fix for finding replacing...
    - VG-Ripper & PG-Ripper (VG-Ripper 2.9.59): Changes: added support for 'GokoImage.com', 'ViperII.com', 'PixxxView.com', 'ImgRex.com', 'PixLiv.com', 'imgsee.me' and 'ImgS.it' links.
    - Toolbox for Dynamics CRM 2011/2013 (XrmToolBox v1.2014.5.28): XrmToolBox updates: fixed connecting to a connection with custom authentication without a saved password. New tool: Solution Components Mover (v1.2014.5.22), which transfers solution components from one solution to another. Import/Export NN relationships (v1.2014.3.7) allows you to import and export many-to-many relationships. Tool updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...
    - Microsoft Ajax Minifier (Microsoft Ajax Minifier 5.10): Fix for Issue #20875 (echo switch doesn't work for CSS). CSS should honor the SASS source-file comments; JS should allow multi-line comment directives.

    New Projects

    - [ISEN] Rendu de projet Naughty3Dogs - Pong3D: Pong3D is a game that revisits the classic Pong concept in a 3D environment, using C# and the Unity3D engine.
    - Bootstrap for MVC: Bootstrap for MVC.
    - F. A. Q. - Najczesciej zadawane pytania (frequently asked questions): FAQ.
    - ForuMvc: Technifutur short project.
    - homework456: no.
    - iStoody: A solution for organizing studies, available as an app for Windows and Windows Phone.
    - liaoliao
    - Price Tracker: Allows a user to track prices based on parsed emails.
    - RoslynEval: Roslyn.
    - Rx Hub: Rx Hub provides server-side computation initiated by subscriber requests.
    - Sharepoint Online AppCache Reset: We are an IT resource company providing virtual IT services and custom and open-source programs for everyday needs.
    - UnitConversionLib (Smart Unit Conversion Library in C#): Conversion of units, arithmetic operations, and parsing quantities with their units at run time. A smart unit converter and conversion library for physical quantities.

    Read the article
