Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.


  • Is there an easier way to create a WCF/OData Data Service Query Provider?

    - by routeNpingme
    I have a simple little data model resembling the following:

        InventoryContext
        {
            IEnumerable<Computer> GetComputers()
            IEnumerable<Printer> GetPrinters()
        }

        Computer
        {
            public string ComputerName { get; set; }
            public string Location { get; set; }
        }

        Printer
        {
            public string PrinterName { get; set; }
            public string Location { get; set; }
        }

    The results come from a non-SQL source, so this data does not come from Entity Framework connected up to a database. Now I want to expose the data through a WCF OData service. The only way I've found to do that so far is to create my own Data Service Query Provider, per this blog tutorial: http://blogs.msdn.com/alexj/archive/2010/01/04/creating-a-data-service-provider-part-1-intro.aspx ... which is great, but seems like a pretty involved undertaking. The code for the provider would be four times longer than my whole data model, just to generate all of the resource sets and property definitions. Is there something like a generic provider in between Entity Framework and writing your own data source from zero? Maybe some way to build an object data source or something, so that the magical WCF unicorns can pick up my data and ride off into the sunset without having to explicitly code the provider?
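
    For what it's worth, if the context can expose its sets as IQueryable<T>, the built-in reflection provider of WCF Data Services may already do the generic-provider job described above. A minimal sketch (the service name is invented; the model follows the post):

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        // each entity type needs a key for the reflection provider, e.g.:
        // [DataServiceKey("ComputerName")] public class Computer { ... }

        public class InventoryContext
        {
            // expose each set as IQueryable<T>; the reflection provider
            // derives the resource sets and property definitions itself
            public IQueryable<Computer> Computers
            {
                get { return GetComputers().AsQueryable(); }
            }

            public IQueryable<Printer> Printers
            {
                get { return GetPrinters().AsQueryable(); }
            }

            // GetComputers() / GetPrinters() as in the model above...
        }

        public class InventoryService : DataService<InventoryContext>
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            }
        }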


  • MySQL German accent-insensitive search in full-text searches

    - by lukaszsadowski
    Let's take an example hotels table:

        CREATE TABLE `hotels` (
          `HotelNo` varchar(4) character set latin1 NOT NULL default '0000',
          `Hotel` varchar(80) character set latin1 NOT NULL default '',
          `City` varchar(100) character set latin1 default NULL,
          `CityFR` varchar(100) character set latin1 default NULL,
          `Region` varchar(50) character set latin1 default NULL,
          `RegionFR` varchar(100) character set latin1 default NULL,
          `Country` varchar(50) character set latin1 default NULL,
          `CountryFR` varchar(50) character set latin1 default NULL,
          `HotelText` text character set latin1,
          `HotelTextFR` text character set latin1,
          `tagsforsearch` text character set latin1,
          `tagsforsearchFR` text character set latin1,
          PRIMARY KEY (`HotelNo`),
          FULLTEXT KEY `fulltextHotelSearch` (`HotelNo`,`Hotel`,`City`,`CityFR`,`Region`,`RegionFR`,`Country`,`CountryFR`,`HotelText`,`HotelTextFR`,`tagsforsearch`,`tagsforsearchFR`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_german1_ci;

    In this table we have, for example, only one hotel with Region = "Graubünden" (please note the umlaut ü character). Now I want the phrases 'graubunden' and 'graubünden' to produce the same match. This is simple with MySQL's built-in collations in regular searches, as follows:

        SELECT * FROM `hotels`
        WHERE `Region` LIKE CONVERT(_utf8 '%graubunden%' USING latin1) COLLATE latin1_german1_ci

    This works fine for both 'graubunden' and 'graubünden', and as a result I receive the proper row. The problem is the MySQL full-text search. What's wrong with this SQL statement?

        SELECT * FROM hotels
        WHERE MATCH (`HotelNo`,`Hotel`,`Address`,`City`,`CityFR`,`Region`,`RegionFR`,`Country`,`CountryFR`,`HotelText`,`HotelTextFR`,`tagsforsearch`,`tagsforsearchFR`)
              AGAINST( CONVERT('+graubunden' USING latin1) COLLATE latin1_german1_ci IN BOOLEAN MODE)
        ORDER BY Country ASC, Region ASC, City ASC

    This doesn't return any result. Any ideas where the dog is buried?
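
    One detail worth checking (an observation added in editing, not from the original post): MySQL requires the column list inside MATCH() to name exactly the columns of a FULLTEXT index, and the query above adds `Address`, which is not part of the `fulltextHotelSearch` index (or of the CREATE TABLE shown). A hedged sketch with the list aligned to the index:

        SELECT * FROM hotels
        WHERE MATCH (`HotelNo`,`Hotel`,`City`,`CityFR`,`Region`,`RegionFR`,
                     `Country`,`CountryFR`,`HotelText`,`HotelTextFR`,
                     `tagsforsearch`,`tagsforsearchFR`)
              AGAINST ('+graubunden' IN BOOLEAN MODE)
        ORDER BY Country ASC, Region ASC, City ASC;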


  • ManyToManyField "table exists" error on syncdb

    - by Derek Reynolds
    When I include a ManyToManyField in one of my models, the following error is thrown:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 351, in handle
            return self.handle_noargs(**options)
          File "/Library/Python/2.6/site-packages/django/core/management/commands/syncdb.py", line 93, in handle_noargs
            cursor.execute(statement)
          File "/Library/Python/2.6/site-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
          File "/Library/Python/2.6/site-packages/django/db/backends/mysql/base.py", line 84, in execute
            return self.cursor.execute(query, args)
          File "build/bdist.macosx-10.6-universal/egg/MySQLdb/cursors.py", line 173, in execute
          File "build/bdist.macosx-10.6-universal/egg/MySQLdb/connections.py", line 36, in defaulterrorhandler
        _mysql_exceptions.OperationalError: (1050, "Table 'orders_proof_approved_associations' already exists")

    Field definition:

        approved_associations = models.ManyToManyField(Association)

    Everything works fine when I remove the field, and the table is nowhere in sight. Any thoughts as to why this would happen?
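
    A hedged note on the usual cause: syncdb never drops or alters existing tables, so a join table left behind by an earlier run (or by a renamed field) collides on the next one. Checking for and removing the stale table by hand looks like this (back it up first if it might hold real data):

        -- run in the MySQL shell for the project's database
        SHOW TABLES LIKE 'orders_proof%';
        DROP TABLE orders_proof_approved_associations;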


  • Job Opportunities

    - by James
    I have a few questions about my job opportunities and I'd appreciate it if people could give me some feedback on what I should have in front of me. I am graduating from the University of Wisconsin-La Crosse this December with a degree in CS and a math minor. I have a cumulative GPA of 3.84 and a major GPA of 4.0 right now (though I still have many classes in front of me). I already have a degree from the U of Minnesota (History, 3.69 GPA) and have worked in the business world for 3+ years (working for a small company in the baseball world, doing some computer programming, statistical research, operations work, technical writing, etc.). I know Java and C well and am also comfortable with Perl. I should have a good grasp of SQL by graduation. I am looking to get a nice programming job (and am open to moving). Anyone have any advice on things I should learn, etc.? Also, I would like to know what everyone thinks about my chances of landing a decent job (I realize that is subjective), and any ideas on the salary I should be looking for (say I am working in a metropolitan area). Thanks.


  • PostgreSQL: Rolling back a transaction within a plpgsql function?

    - by jamieb
    Coming from the MS SQL world, I tend to make heavy use of stored procedures. I'm currently writing an application that uses a lot of PostgreSQL plpgsql functions. What I'd like to do is roll back all INSERTs/UPDATEs contained within a particular function if I get an exception at any point within it. I was originally under the impression that each function is wrapped in its own transaction and that an exception would automatically roll back everything. However, that doesn't seem to be the case. I'm wondering if I ought to be using savepoints in combination with exception handling instead? But I don't really understand the difference between a transaction and a savepoint well enough to know if this is the best approach. Any advice, please?

        CREATE OR REPLACE FUNCTION do_something( _an_input_var int )
        RETURNS bool AS $$
        DECLARE
            _a_variable int;
        BEGIN
            INSERT INTO tableA (col1, col2, col3)
            VALUES (0, 1, 2);

            INSERT INTO tableB (col1, col2, col3)
            VALUES (0, 1, 'whoops! not an integer');

            -- The exception will cause the function to bomb, but the values
            -- inserted into "tableA" are not rolled back.
            RETURN True;
        END;
        $$ LANGUAGE plpgsql;
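
    For reference, a plpgsql function runs inside the caller's transaction, but a BEGIN ... EXCEPTION block establishes an implicit savepoint: entering the handler rolls back everything done since the block began. A minimal sketch against the tables above:

        CREATE OR REPLACE FUNCTION do_something( _an_input_var int )
        RETURNS bool AS $$
        BEGIN
            INSERT INTO tableA (col1, col2, col3) VALUES (0, 1, 2);
            INSERT INTO tableB (col1, col2, col3) VALUES (0, 1, 'whoops! not an integer');
            RETURN True;
        EXCEPTION WHEN others THEN
            -- both INSERTs above are rolled back before this handler runs
            RAISE NOTICE 'aborted: %', SQLERRM;
            RETURN False;
        END;
        $$ LANGUAGE plpgsql;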


  • Persisting complex data between postbacks in ASP.NET MVC

    - by Robert Wagner
    I'm developing an ASP.NET MVC 2 application that connects to some services to do data retrieval and update. The services require that I provide the original entity along with the updated entity when updating data. This is so they can do change tracking and optimistic concurrency. The services cannot be changed. My problem is that I need to somehow store the original entity between postbacks. In WebForms, I would have used ViewState, but from what I have read, that is out for MVC. The original values do not have to be tamper-proof, as the services treat them as untrusted. The entities would be 1 KB at most, and it is an intranet app. The options I have come up with are:

        1. Session - Ruled out - Store the entity in the Session, but I don't like this idea as there are no plans to share session state between servers.
        2. URL - Ruled out - The data is too big.
        3. HiddenField - Store the serialized entity in a hidden field, perhaps with encryption/encoding.
        4. HiddenVersion - The entities have a (SQL) version field on them, which I could put into a hidden field. Then on a save I get the "original" entity from the services and compare the versions, doing my own optimistic concurrency.
        5. Cookies - Like 3 or 4, but using a cookie instead of a hidden field.

    I'm leaning towards option 4, although 3 would be simpler. Are these valid options or am I going down the wrong track? Is there a better way of doing this?
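
    A hedged sketch of option 4 (every name here is illustrative, not from the original post): render the row version into a hidden field with Html.HiddenFor(m => m.OriginalVersion), then re-fetch and compare on save:

        [HttpPost]
        public ActionResult Edit(CustomerEditModel posted)
        {
            // fresh copy from the services; this stands in for the "original" entity
            var original = _service.GetCustomer(posted.Id);

            if (original.Version != posted.OriginalVersion)
            {
                // someone else saved since this form was rendered
                ModelState.AddModelError("", "This record was changed by another user.");
                return View(posted);
            }

            _service.Update(original, MapToEntity(posted)); // original + updated pair
            return RedirectToAction("Index");
        }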


  • How to import data to SAP

    - by Mehmet AVSAR
    Hi,

    As a complete stranger in the town of SAP, I want to transfer my own application's (mobile salesforce automation) data to SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data, etc. I have an additional database which holds matchings of records, i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, for sure, has to know its counterpart, including parameters. After searching Google and the SAP help site for a day, I discovered that it's going to be a bit more painful than I expected. Especially the SAP site, which does not give even a clue on it; let's just say I couldn't find one. We have transferred our data to some other ERP systems before; some wanted XML files, others exposed their APIs. My point is: is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated.

    Regards,
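
    As a hedged illustration of the matching database described above (names invented for the sketch), the cross-reference usually ends up as a simple key map that an SSIS lookup step could join against:

        CREATE TABLE sap_key_map (
            entity_type VARCHAR(20) NOT NULL,  -- 'customer', 'stock', 'invoice', ...
            local_id    INT         NOT NULL,  -- e.g. customer 345 in the mobile app
            sap_key     VARCHAR(20) NOT NULL,  -- e.g. '120-035-0223' on the SAP side
            PRIMARY KEY (entity_type, local_id)
        );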


  • Hopping from a C++ to a Perl/Unix job

    - by rocknroll
    Hi all,

    I have been a C++/Linux developer till now and I am adept in this stack. Of late I have been getting opportunities that require Perl/Unix (with knowledge of C++ and shell scripting) expertise. Organizations are showing interest even though I don't have much scripting experience to boast of. The role is more of a support/maintenance project involving SQL as well. I am now in a fix about whether to forgo these offers or not. I don't know the dynamics of an IT organization, and thus on one hand I fear that my C++ experience will be nullified, while on the positive side I am getting to work on a new technology stack which will only add to my skill set. I am sure most of you at some point have encountered such dilemmas and have taken some decision. I want you to share your perspectives on such a scenario, where a person is required to change his/her technology stack when changing jobs. What are the merits and demerits in going with either of the choices? Also, I know that C++ isn't going anywhere in the near future. What about Perl? I have no clue as to what the future holds for a Perl developer, or whether there are enough opportunities for Perl developers. I am asking this question here because most of my fellow programmers face this career-choice dilemma. Thanks.


  • Slow MySQL Query Breaking my back!

    - by Chris n
    So, I have tried everything I can think of, and can't get this query to run in less than 3 seconds on my local server. I know the problem has to do with the OR referencing both owner_id and person_id. If I run one or the other alone, it returns instantly, but together with an OR I can't seem to make it work. I looked into rewriting the code, but the way the app was designed it won't be easy. Is there an equivalent way to express the OR that won't take so long? Here is the SQL:

        SELECT event_types.name AS event_type_name, event_types.id AS id,
               COUNT(events.id) AS count, SUM(events.estimated_duration) AS time_sum
        FROM events, event_types
        WHERE event_types.id = events.event_type_id
          AND events.event_type_id != '4'
          AND (events.status != 'cancelled')
          AND events.event_type_id != 64
          AND (events.owner_id = 161 OR events.person_id = 161)
        GROUP BY event_types.name
        ORDER BY event_types.name DESC;

    Here's the EXPLAIN output, although I'm guessing it's unnecessary because there is probably a better way to structure that OR that is obvious:

        id: 1 | select_type: SIMPLE | table: event_types | type: range | possible_keys: PRIMARY | key: PRIMARY | key_len: 4 | ref: NULL | rows: 78 | Extra: Using where; Using temporary; Using filesort
        id: 1 | select_type: SIMPLE | table: events | type: ref | possible_keys: index_events_on_status, index_events_on_event_type_id, index_events_on_person_id, index_events_on_owner_id | key: index_events_on_event_type_id | key_len: 5 | ref: thenumber_production.event_types.id | rows: 907 | Extra: Using where

    Thanks so much!
    chris.
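
    One common rewrite for an OR across two separately indexed columns is to split it into a UNION, so each branch can use its own index. A hedged sketch, not tested against the schema above:

        SELECT event_types.name AS event_type_name, event_types.id AS id,
               COUNT(e.id) AS count, SUM(e.estimated_duration) AS time_sum
        FROM event_types
        JOIN (
            SELECT id, event_type_id, estimated_duration FROM events
            WHERE owner_id = 161 AND status != 'cancelled'
              AND event_type_id NOT IN (4, 64)
            UNION                -- de-duplicates events matching both branches
            SELECT id, event_type_id, estimated_duration FROM events
            WHERE person_id = 161 AND status != 'cancelled'
              AND event_type_id NOT IN (4, 64)
        ) AS e ON event_types.id = e.event_type_id
        GROUP BY event_types.name
        ORDER BY event_types.name DESC;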


  • AI: Determining what tests to run to get the most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com

    I have a system (see the about page on the site for details) where:

        - I need to output a ranked list, with confidences, of categories that match a particular feature vector
        - the binary feature vectors are a list of site IDs & whether this session detected a hit
        - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
        - categories are a large, non-closed set (user IDs)
        - my total feature space is approximately 50 million items (URLs)
        - for any given test, I can only query approx. 0.2% of that space
        - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
        - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap sql queries
        - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that hasn't been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far.

    Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
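
    The exploit/explore balance described above is the classic multi-armed bandit setting, and UCB1 is probably the most standard cheap heuristic for it. A hedged Python sketch (the per-site statistics are assumed to come from the cheap SQL queries mentioned above):

        import math

        def probe_score(hits, trials, total_trials, c=1.4):
            """UCB1-style score: exploit sites that discriminated well so far,
            but keep exploring sites that have rarely been tested."""
            if trials == 0:
                return float("inf")  # untested features get probed at least once
            exploit = hits / trials                                   # observed hit rate
            explore = c * math.sqrt(math.log(total_trials) / trials)  # uncertainty bonus
            return exploit + explore

        # each round, probe the ~0.2% of candidate sites with the highest scores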


  • Using EclipseLink

    - by Ross Peoples
    I am still new to Java and Eclipse, and I'm trying to get my application to connect to a database. I think I want to use EclipseLink, but all of the documentation on the matter assumes you already know everything there is to know about everything. I keep getting linked back to this tutorial: http://www.vogella.de/articles/JavaPersistenceAPI/article.html

    But it's basically useless because it doesn't tell you HOW to do anything. For the installation section, it tells you to download EclipseLink and gives you a link to the download page, but doesn't tell you what to do with it after you download it. The download page doesn't either. I used the "Install New Software" option in Eclipse to install EclipseLink into Eclipse, but it gave me like 4 different options, none of which are explained anywhere. It gave me the options JPA, MOXy, SDO, etc., but I don't know which one I need, so I just installed them all. Everything on the web assumes you are already a Java guru; things that are second nature to Java devs are never explained, which is very frustrating for someone trying to learn. So how do I install and USE EclipseLink in my project, and what do I need to do to connect it to a Microsoft SQL server? Again, I am new to all of this, so I have no clue what to do. Thanks for the help.
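
    For the JPA flavor, the wiring usually lives in META-INF/persistence.xml. A hedged sketch for SQL Server (URL, credentials and unit name are placeholders; the Microsoft JDBC driver jar must be on the classpath, and these javax.persistence.jdbc.* property names assume JPA 2.0 - older EclipseLink releases use the eclipselink.jdbc.* equivalents):

        <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
          <persistence-unit name="myUnit" transaction-type="RESOURCE_LOCAL">
            <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
            <properties>
              <property name="javax.persistence.jdbc.driver"
                        value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
              <property name="javax.persistence.jdbc.url"
                        value="jdbc:sqlserver://localhost:1433;databaseName=mydb"/>
              <property name="javax.persistence.jdbc.user" value="user"/>
              <property name="javax.persistence.jdbc.password" value="secret"/>
            </properties>
          </persistence-unit>
        </persistence>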


  • Should I be relying on WebTests for data validation?

    - by Alexander Kahoun
    I have a suite of web tests created for a web service. I use it for testing a particular input method that updates a SQL database. The web service doesn't have a way to retrieve the data; that's not its purpose, only to update it. I have a validator that validates the response XML that the web service generates for each request. All that works fine. It was suggested by a teammate that I add data validation, so that I check the database to see the data after the initial response validator runs and compare it with what was in the input request. We have a number of services and libraries, separate from the web service I'm testing, that I can use to get the data and compare it. The problem is that when I run the web test, the data validation always fails, even when the request succeeds. I've tried putting the thread to sleep between the response validation and the data validation, but to no avail; it always gets the data from before the response validation. I can set a breakpoint and visually see that the data has been updated in the DB; the funny thing is, when I step through it in debug with the breakpoint, it does validate successfully. Before I get too much further into this issue I have to ask: is this the purpose of web tests? Should I be able to validate data through service calls in this manner, or am I asking too much of a web test, and the response validation is as far as I should go?


  • Should the Entity Framework + self-tracking entities be saving me time?

    - by sipwiz
    I've been using the Entity Framework in combination with the self-tracking entity code generation templates for my latest Silverlight to WCF application. It's the first time I've used the Entity Framework in a real project, and my hope was that I would save myself a lot of time and effort by being able to automatically update the whole data access layer of my project when my database schema changed. Happily, I've found that to be the case: updating my database schema by adding a new table, changing column names, adding new columns, etc. can be propagated to my business object classes by using the "update from database" option on the Entity Framework model. Where I'm hurting is the CRUD operations within my WCF service in response to actions on my Silverlight client. I use the same self-tracking entity framework business objects in my Silverlight app, but I find I'm continually having to fight against problems such as foreign key associations not being handled correctly when updating an object, or the change tracker getting confused about the state of an object at the Silverlight end and the data access operation within the WCF layer throwing a wobbly. It's got to the point where I have now spent more time dealing with these quirks than I did on my previous project, where I used LINQ to SQL as the starting point for rolling my own business objects. Is it just me being hopeless, or is the self-tracking entities approach something that should be avoided until it's more mature?


  • Having problems with sqlDataReader

    - by Anthony
    I am using a SqlDataReader to get data and set it to session variables. The problem is it doesn't want to work with expressions: I can reference any other column in the table, but not the expressions. The SQL does work. The code is below.

    Thanks in advance,
    Anthony

        Using myConnectionCheck As New SqlConnection(myConnectionString)
            Dim myCommandCheck As New SqlCommand()
            myCommandCheck.Connection = myConnectionCheck
            myCommandCheck.CommandText = "SELECT Projects.Pro_Ver, Projects.Pro_Name, Projects.TL_Num, Projects.LP_Num, Projects.Dev_Num, Projects.Val_Num, Projects.Completed, Flow.Initiate_Date, Flow.Requirements, Flow.Req_Date, Flow.Dev_Review, Flow.Dev_Review_Date, Flow.Interface, Flow.Interface_Date, Flow.Approval, Flow.Approval_Date, Flow.Test_Plan, Flow.Test_Plan_Date, Flow.Dev_Start, Flow.Dev_Start_Date, Flow.Val_Start, Flow.Val_Start_Date, Flow.Val_Complete, Flow.Val_Complete_Date, Flow.Stage_Production, Flow.Stage_Production_Date, Flow.MKS, Flow.MKS_Date, Flow.DIET, Flow.DIET_Date, Flow.Closed, Flow.Closed_Date, Flow.Dev_End, Flow.Dev_End_Date, Users_1.Email AS Expr1, Users_2.Email AS Expr2, Users_3.Email AS Expr3, Users_4.Email AS Expr4, Users_4.FNAME, Users_3.FNAME AS Expr5, Users_2.FNAME AS Expr6, Users_1.FNAME AS Expr7 FROM Projects INNER JOIN Users AS Users_1 ON Projects.TL_Num = Users_1.PIN INNER JOIN Users AS Users_2 ON Projects.LP_Num = Users_2.PIN INNER JOIN Users AS Users_3 ON Projects.Dev_Num = Users_3.PIN INNER JOIN Users AS Users_4 ON Projects.Val_Num = Users_4.PIN INNER JOIN Flow ON Projects.id = Flow.Flow_Pro_Num WHERE id = "
            myCommandCheck.CommandText += QSid
            myConnectionCheck.Open()
            myCommandCheck.ExecuteNonQuery()
            Dim count As Int16 = myCommandCheck.ExecuteScalar
            If count = 1 Then
                Dim myDataReader As SqlDataReader
                myDataReader = myCommandCheck.ExecuteReader()
                While myDataReader.Read()
                    Session("TL_email") = myDataReader("Expr1").ToString()
                    Session("PE_email") = myDataReader("Expr2").ToString()
                    Session("DEV_email") = myDataReader("Expr3").ToString()
                    Session("VAL_email") = myDataReader("Expr4").ToString()
                    Session("Project_Name") = myDataReader("Pro_Name").ToString()
                End While
                myDataReader.Close()
            End If
        End Using


  • Access SSAS cube from across domains without direct database connection

    - by SuperKing
    Hello, I'm working with SQL Server Analysis Services for the first time, and have the dilemma of working on a project in which users must be able to access SSAS cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will each have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes, and Organization B users must be able to view Organization A's cubes, and neither organization should have access to the other's connection string.

    I've read about allowing HTTP access to the SSAS server and cube via the link below, but that requires setting up users for authentication, or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or whether this is the preferred way to do it. Is performance acceptable here? http://technet.microsoft.com/en-us/library/cc917711.aspx

    I also wonder if it perhaps makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, pulls that data into a database on the organization's own server, and then rebuilds the cube. That cube could then be queried without having to connect to the other organization's server at view time. Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this, or are there other possible options? Thanks, and please let me know if you need more info; I'm still unclear on how some of this works.
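
    For reference, the HTTP access described in that TechNet link works by exposing the msmdpump.dll ISAPI endpoint through IIS; a client then connects with a connection string along these lines (host and catalog are placeholders invented for the sketch):

        Provider=MSOLAP;
        Data Source=https://orgb.example.com/olap/msmdpump.dll;
        Initial Catalog=OrgBCube;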


  • Development environment for ASP.NET with EpiServer

    - by Binary255
    At our company we are going to develop more for the Windows platform than we have done up until now. As this scale of Windows development is new to us, it would be nice to get some feedback from experienced developers.

    Requirements we have:

        - 5 developers from the beginning, 15 developers a year from now.
        - All developers should be able to develop at the same time.
        - Be able to develop solutions for ASP.NET and EpiServer 5.

    Our idea:

        - A shared server which developers use for development through Terminal Services.
        - SQL Server Express.
        - Start with some free Express edition of Visual Studio; upgrade to a commercial version if we need the additional features.
        - Use IIS and not the web server built into Visual Studio.

    Questions:

        - Are we on the right track? In terms of license costs the above should be cheapest, right?
        - What do you think about multiple developers doing development on a shared TS server? Do you know of any company which has a similar development environment?
        - Are we going to miss some features of the full Visual Studio version immediately? Is using the Express version a bad choice?
        - Is IIS the best choice? If we use IIS the developers may use the same port for deployment. If we use the built-in web server, each one has to set their own port, as we're sharing a machine.

    Comment answer: We are thinking about a shared server as it will most likely decrease the license costs, so it's purely a cost issue. We are using CVS for version control. Our situation is that we develop on Mac and Linux; that's why buying 1 server license + Visual Studio licenses seems to be a cost-effective way of starting this type of development.


  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data.

    More specific example: I have a CSV file with each StackOverflow user in a row, and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like "You vs. the average StackOverflow poster on this metric". I want a table to appear there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow.

    I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, so I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like eps, but not something like HTML. The report formatting is fussy, done, externally determined, and handed down from on high. It's print-oriented, not webby.

    Thanks in advance.


  • How can I store HTML in a Doctrine YML fixture

    - by argibson
    I am working with a CMS-type site in Symfony 1.4 (Doctrine 1.2) and one of the things that is frustrating me is not being able to store HTML pages in YML fixtures. Instead I have to create SQL backups of the data if I want to drop and rebuild, which is a bit of a pest when Symfony/Doctrine has a fantastic mechanism for doing exactly this. I could write a mechanism that reads in a set of HTML files for each page and fills the data in that way (or even write it as a task). But before I go down that road, I am wondering if there is any way for HTML to be stored in a YML fixture so that Doctrine can simply import it into the database.

    Update: I have tried using symfony doctrine:data-dump and symfony doctrine:data-load; however, despite the dump correctly creating the fixture with the HTML, the load task appears to 'skip' the value of the column with the HTML and enters everything else into the row. In the database the field doesn't show up as NULL but rather empty, so I believe Doctrine is adding the value of the column as ''. Below is a sample of the YML fixture that symfony doctrine:data-dump created. I have tried running symfony doctrine:data-load against various forms of this, including removing all the escaped characters (new lines and quotes, leaving only angle brackets), but it still doesn't work.

        Product_69:
          name: 'My Product'
          Developer: Developer_30
          tagline: 'Text that briefly describes the product'
          version: '2008'
          first_published: ''
          price_code: A79
          summary: ''
          box_image: ''
          description: "<div id=\"featureSlider\">\n  <ul class=\"slider\">\n    <li class=\"sliderItem\" title=\"Summary\">\n      <div class=\"feature\">\n        Some text goes in here</div>\n    </li>\n  </ul>\n</div>\n"
          is_visible: true
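
    One hedged thing to try: YAML's literal block scalar (|) stores multi-line HTML verbatim, with no escaping, which sidesteps the quoted-and-escaped form the dump produced. The same fixture in that style would look like:

        Product_69:
          name: 'My Product'
          description: |
            <div id="featureSlider">
              <ul class="slider">
                <li class="sliderItem" title="Summary">
                  <div class="feature">Some text goes in here</div>
                </li>
              </ul>
            </div>
          is_visible: true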


  • MissingMethodException thrown when calling new form in Compact Framework

    - by Boerema
    I'm updating an old mobile device application for better flexibility. Basically, I added the ability to configure the address of our SQL server in case we want to use our test server as opposed to our production server. I don't think this is causing the problem, but I wanted to state it. I also upgraded the project from a VS 2000 project to a VS 2005 project. The issue I am having is that when I try to run the program in the VS emulator for Pocket PC, I get an error. It occurs after our "main menu" form loads and the user selects the next form to load. The form is initialized without issue, but when we try to run its ShowDialog() method, it throws a System.MissingMethodException. I don't have a lot of experience with the Compact Framework and really have no idea where to start looking for problems. I stepped the debugger through the entire initialization process for the new form and it ran without issue. But, again, when we come to the ShowDialog call, it throws the error. Any ideas on where to start looking, or known issues, would be greatly appreciated.


  • Redirecting users on select from autocomplete?

    - by juno-2
    Hi, I'm trying to implement the jQuery autocomplete plugin. I've got it up and running, but something is not working properly. Basically I have an autocomplete list of employees. The list is generated from a table in a SQL database (employee_name and employee_ID), using a VB.NET handler (.ashx file). The data is formatted as employee_name-employee_ID. So far so good, and all employees are listed in the autocomplete. The problem is that I don't know how to redirect a user to a certain page (for example employee_profile.aspx) when they've selected an employee from the autocomplete. This is my redirect code, but it isn't working like it should:

        $('#fname2').result(function(event, data, formatted) {
            location.href = "employee_profile.aspx?id=" + data
        });

    For example, when a user selects an employee, it redirects them to employee_profile.aspx?id=name of employee-id of employee (for example: employee_profile.aspx?id=John Doe-91210) instead of employee_profile.aspx?id=91210. I know I can strip the employee_ID with:

        formatResult: function(data, value) {
            return value.split("-")[1];
        }

    But I do not know how to pass that employee_ID to the redirect page. Here is my whole code:

        $().ready(function() {
            $("#fname2").autocomplete("AutocompleteData.ashx", {
                minChars: 3,
                selectFirst: false,
                formatItem: function(data, i, n, value) {
                    return value.split("-")[0];
                },
                //Not used, just for splitting employee_ID
                //formatResult: function(data, value) {
                //    return value.split("-")[1];
                //}
            });
            $('#fname2').result(function(event, data, formatted) {
                location.href = "employee_profile.aspx?id=" + data
            });
        });

    I know I'm very close and it should be something really simple, but can anyone help me out?
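
    A hedged fix, assuming the legacy autocomplete plugin passes the raw "name-ID" row into the result callback: do the split inside the redirect handler itself, instead of in formatResult:

        $('#fname2').result(function(event, data, formatted) {
            // data is the selected row; keep the part after the dash
            // ("John Doe-91210" -> "91210")
            var id = String(data).split("-")[1];
            location.href = "employee_profile.aspx?id=" + id;
        });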


  • CQRS - The query side

    - by mattcodes
    A lot of the blogosphere articles related to CQRS (command query responsibility segregation) seem to imply that all screens/viewmodels are flat, e.g. Name, Age, Location of Birth, etc., and thus the suggestion that, implementation-wise, we stick them into a fast read source (single table per view, MySQL, etc.) and pull them out with something primitive like SqlDataReader, kicking that nasty NHibernate ORM, etc.

    However, whilst I agree that domain models don't map well to most screens, many of the screens that I work with are more dimensional, and I'm sure this is pretty common in LOB apps. So my question is: how are people handling a screen that, for example, displays a summary of customer details and then a list of their orders with a [more detail] link, etc.?

    I thought about keeping the straightforward SQL query to the query database but breaking off the outer join so I can build a suitable ViewModel for the View, but it seems like overkill? Alternatively (and this is starting to feel yuck): in the CustomerSummaryView table, have a text/big (whatever the type is in your DB) column called Orders, with the columns for the order summary grid separated by "," and rows by "|". Even with an XML datatype it still feels dirty.

    Any thoughts on an optimal practice?
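
    One hedged alternative to the delimited-text column (a common read-model pattern, not from the original post): keep one flat table per screen region, so the dimensional screen becomes two cheap reads on the same key:

        -- header region: one row per customer
        CREATE TABLE CustomerSummaryView (
            CustomerId INT PRIMARY KEY,
            Name       VARCHAR(100),
            Age        INT
        );

        -- grid region: one row per order line shown in the summary grid
        CREATE TABLE CustomerOrderSummaryView (
            CustomerId INT,              -- same key the screen already holds
            OrderId    INT,
            OrderDate  DATE,
            Total      DECIMAL(10,2),
            PRIMARY KEY (CustomerId, OrderId)
        );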


  • structured vs. unstructured data in db

    - by Igor
    The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs: pretty much everything in /proc/cpuinfo, /proc/meminfo, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now, I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the data gathered to figure out performance problems down the road, but this is a new application so I'm not sure what exactly I'm looking for performance-wise just yet.

    I could structure the data in the DB: have a column for each key I'm gathering. The table would end up being O(100) columns wide, it would be a pain to put into the DB, and I would have to add new columns if I start gathering a new stat. But it would be easy to sort/analyze the data just using SQL.

    Or I could just dump my unstructured data blob into the table. Maybe three columns: host ID, timestamp, and a serialized version of my array, probably using JSON in a TEXT field.

    Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
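
    A hedged middle ground between the two layouts (column names invented for illustration): promote the few keys you already know you will query into real columns, and keep everything else in the serialized blob:

        CREATE TABLE perf_snapshot (
            host_id    INT      NOT NULL,
            taken_at   DATETIME NOT NULL,
            loadavg_1m FLOAT    NULL,       -- hot keys promoted to indexable columns
            mem_free   BIGINT   NULL,
            raw_json   TEXT     NOT NULL,   -- everything else stays in the JSON blob
            PRIMARY KEY (host_id, taken_at)
        );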


  • General N-Tier Architecture Question

    - by whatispunk
    In an N-tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies, BusinessLogicLayer.dll and DataAccessLayer.dll, to handle all this logic? How do you actually represent these layers? It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc., each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs.

    I want to create a web app using MVP, and it doesn't seem so cut and dried as this. There are lots of opinions about where the business logic is supposed to be put in MVP, and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing.

    I was thinking of something like the following for my architecture: I'd use LINQ to SQL to talk to the database, WCF services to define data contract interfaces for the business logic layer, and then MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project; most of the LINQ stuff is already done, so it's stuck. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly, etc. This sort of makes sense in my head, but it's getting really late.

    Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!


  • Locking issues with replacing files on a website

    - by Moe Sisko
    I want to replace existing files on an IIS website with updated versions. Say these files are large PDF documents which can be accessed via hyperlinks. The site is up 24x7, so I'm concerned about locking issues when a file is being updated at exactly the same time that someone is trying to read the file. The files are updated using C# code run on the server. I can think of two options for opening the file for writing.

    Option 1) Open the file for writing, using FileShare.Read:

        using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the document opens up as a blank page.

    Option 2) Open the file for writing, using FileShare.None:

        using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the browser shows an error. In IE 8 you get HTTP 500, "The website cannot display the page", and in Firefox 3.5 you get: "The process cannot access the file because it is being used by another process."

    The browser behaviour kind of makes sense, and both seem reasonable. I guess it's highly unlikely that a user will attempt to read a file at exactly the same time you are updating it. It would be nice if, somehow, the file update was atomic, like updating a database with SQL wrapped in a transaction. I'm wondering if you guys worry about this sort of thing, and prefer either of the above options, or even have other options of your own for updating files.
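
    A hedged third option: write the new version to a temporary file in the same directory and swap it into place, so readers only ever see a complete file. The sketch below leans on File.Replace (a wrapper over the Win32 ReplaceFile call; both paths are assumed to be on the same volume):

        // stage the new content beside the live file, then swap
        string temp = path + ".tmp";
        using (FileStream stream = new FileStream(temp, FileMode.Create,
                                                  FileAccess.Write, FileShare.None))
        {
            // ... write the updated pdf bytes ...
        }
        // readers keep getting the old file until this call; the old copy
        // is preserved as path + ".bak" in case the update must be undone
        File.Replace(temp, path, path + ".bak");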


  • Adding a guideline to the editor in Visual Studio

    - by xsl
    Introduction

    I've always been searching for a way to make Visual Studio draw a line after a certain number of characters. Below is a guide to enable these so-called guidelines for various versions of Visual Studio.

    Visual Studio 2010

    Install Paul Harrington's Editor Guidelines extension. Open the registry at HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Text Editor and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other one (80) is the column at which the line will be displayed. Alternatively, install the Guidelines UI extension, which adds entries to the editor's context menu for adding/removing guidelines without needing to edit the registry directly. The current disadvantage of this method is that you can't specify the column directly.

    Visual Studio 2008 and Other Versions

    If you are using Visual Studio 2008, open the registry at HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other one (80) is the column at which the line will be displayed. The vertical line will appear when you restart Visual Studio. This trick also works for various other versions of Visual Studio, as long as you use the correct path:

        2003: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\7.1\Text Editor
        2005: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor
        2008: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor
        2008 Express: HKEY_CURRENT_USER\Software\Microsoft\VCExpress\9.0\Text Editor

    This also works in SQL Server 2005 and probably other versions.
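
    For the registry edits above, an equivalent .reg file (shown for Visual Studio 2008; adjust the key path per the list above) can be imported instead of editing by hand:

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor]
        "Guides"="RGB(100,100,100), 80"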

