Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.


  • Django - Problem with models/manager to organise a query...

    - by user296644
    Hi, I have an application that counts the number of accesses to an object, per website, in the same database.

        class SimpleHit(models.Model):
            """Hit counter for a given object"""
            content_type = models.ForeignKey(ContentType)
            object_id = models.PositiveIntegerField()
            content_object = generic.GenericForeignKey('content_type', 'object_id')
            site = models.ForeignKey(Site)
            hits_total = models.PositiveIntegerField(default=0, blank=True)
            [...]

        class SimpleHitManager(models.Manager):
            def get_query_set(self):
                print self.model._meta.fields
                qset = super(SimpleHitManager, self).get_query_set()
                qset = qset.filter(hits__site=settings.SITE_ID)
                return qset

        class SimpleHitBase(models.Model):
            hits = generic.GenericRelation(SimpleHit)
            objects = SimpleHitManager()
            _hits = None

            def _db_get_hits(self, only=None):
                if self._hits is None:
                    try:
                        self._hits = self.hits.get(site=settings.SITE_ID)
                    except SimpleHit.DoesNotExist:
                        self._hits = SimpleHit()
                return self._hits

            @property
            def hits_total(self):
                return self._db_get_hits().hits_total

            [...]

            class Meta:
                abstract = True

    And I have a model like:

        class Model(SimpleHitBase):
            name = models.CharField(max_length=255)
            url = models.CharField(max_length=255)
            rss = models.CharField(max_length=255)
            creation = AutoNowAddDateTimeField()
            update = AutoNowDateTimeField()

    So, my problem is this: when I call Model.objects.all(), I would like a single SQL request (not two). Right now there is one for Model, to get its information, and one for the hits, to read the counter (hits_total). This is because I cannot read hits.hits_total directly (due to SITE_ID?). I have tried select_related, but it does not seem to work... Questions: how can I automatically add a column to the queryset, like (SELECT hits.hits_total, model.* FROM [...])? Or get a working select_related with my models? I want this model to be pluggable onto any other existing model. Thank you, best regards.
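    One possible approach (a sketch, not from the original post): have the manager pull the counter into the same SELECT with QuerySet.extra(). The table name app_simplehit is an assumption, and the content-type match is omitted for brevity:

        from django.conf import settings
        from django.db import models

        class SimpleHitManager(models.Manager):
            def get_query_set(self):
                qset = super(SimpleHitManager, self).get_query_set()
                table = self.model._meta.db_table
                # Correlated subquery evaluated by the database, so no second
                # query per object; 'app_simplehit' is an assumed table name.
                subquery = ("SELECT hits_total FROM app_simplehit "
                            "WHERE app_simplehit.object_id = %s.id "
                            "AND app_simplehit.site_id = %%s") % table
                return qset.extra(select={'hits_total': subquery},
                                  select_params=[settings.SITE_ID])

    With this, every instance returned by Model.objects.all() carries a hits_total attribute computed in one round trip; the hits_total property on SimpleHitBase would have to go, so it does not shadow the annotated column.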

    Read the article

  • Why will one of BOF/EOF very occasionally be true for a new, non-empty recordset?

    - by jjb
        set recordsetname = databasename.OpenRecordset(SQLString)
        if recordsetname.BOF <> True and recordsetname.EOF <> True then
            'do something
        end if

    Two questions. First, the above test can incorrectly evaluate to false, but only extremely rarely: one was lurking in my code and failed today, I believe for the first time in five years of daily use (that's how I found it). Why will one of BOF/EOF very occasionally be true for a non-empty recordset? It seems so rare that I wonder why it occurs at all. Second, is this a foolproof replacement:

        if recordsetname.BOF <> True or recordsetname.EOF <> True then

    Edit to add details of the code: customers have orders; each order begins with a BeginOrder item, ends with an EndOrder item, and the items in the order sit in between. The SQL is:

        ' ids are autoincrement long integers '
        SQLString = "select * from Orders where type = BeginOrder or type = EndOrder"

        Dim OrderOpen As Boolean
        OrderOpen = False
        Set rs = db.OpenRecordset(SQLString)
        If rs.BOF <> True And rs.EOF <> True Then
            rs.MoveLast
            If rs.Fields("type").Value = BeginOrder Then
                OrderOpen = True
            End If
        End If
        If OrderOpen = False Then
            'code here to add a new BeginOrder item to the Orders table '
        End If
        ShowOrderHistory 'displays the customer's order history '

    In this case the history looks like this:

        BeginOrder
        Item a
        Item b
        ...
        Item n
        EndOrder
        BeginOrder
        Item a
        Item b
        ...
        Item m
        BeginOrder  <-- should not be there, as the previous order is still open
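    For reference, a sketch of the conventional DAO guard (not from the original post): an empty recordset has BOF and EOF both True, and MoveLast forces full population before RecordCount is trusted:

        Set rs = db.OpenRecordset(SQLString)
        If Not (rs.BOF And rs.EOF) Then   ' both True only for an empty recordset
            rs.MoveLast                   ' force the recordset to fully populate
            If rs.RecordCount > 0 Then
                If rs.Fields("type").Value = BeginOrder Then
                    OrderOpen = True
                End If
            End If
        End If

    By De Morgan's laws, the Or-variant proposed above is equivalent to Not (rs.BOF And rs.EOF), i.e. the standard emptiness test; the MoveLast/RecordCount step adds protection against a recordset that has not fully populated yet.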

    Read the article

  • Looking for an email/report templating engine with database backend - for end-users ...

    - by RizwanK
    We have a number of customers that we have to send monthly invoices to. Right now, I'm maintaining a codebase that runs SQL queries against our customer database and billing database, places that data into emails, and sends them. I grow weary of maintaining this every time we want to include a new promotion or change our customer service phone numbers. So I'm looking for a replacement that moves more of this into the hands of those requesting the changes. In my ideal world, I need:

    - A WYSIWYG (man, does anyone even say that anymore?) email editor that generates templates based upon the output of a database query.
    - The ability to drag and drop various fields from the database query into the email template.
    - Display of sample email results with the database query.
    - A web application, preferably not requiring IIS.
    - As little code as possible for the end user, but basic functionality allowed (i.e. arrays/for loops).
    - Either its own email delivery engine, or output written in a way that lets me easily write a Python script to deliver the email.
    - Support for generic database connectors (I need MSSQL and MySQL).
    - F/OSS.

    So... can anyone suggest a project like this, or some tools that'd be useful for rolling my own? (My current alternative idea is using something like ERB or Tenjin, having them write the code, but not having live preview in the editor would suck...)

    Read the article

  • How can I secure my $_GETs in PHP?

    - by ggfan
    My profile.php displays all the user's postings, comments and pictures. If the user wants to delete a posting, the posting's id is sent to remove.php, so it's remove.php?action=removeposting&posting_id=2; if they want to remove a picture, it's remove.php?action=removepicture&picture_id=1. Using the GET data, I query the database to display the info they want to delete, and if they confirm, they click "yes"; the actual delete happens via $_POST, NOT $_GET, to prevent cross-site request forgery. My question is: how do I make sure the GETs are not some JavaScript code or SQL injection that will mess me up? Here is my remove.php:

        // how do I make $action safe?
        // should I use mysqli_real_escape_string?
        // use strip_tags()?
        $action = trim($_GET['action']);
        if (($action != 'removeposting') && ($action != 'removefriend') && ($action != 'removecomment')) {
            echo "please don't change the action. go back and refresh";
            header("Location: index.php");
            exit();
        }
        if ($action == 'removeposting') {
            // get the info and display it in a form; if the user clicks "yes", delete
        }
        if ($action == 'removepicture') {
            // remove pic
        }

    I know I can't be 100% safe, but what are some common defenses I can use?

    EDIT: to prevent XSS, do this?

        $action = trim($_GET['action']);
        $action = htmlspecialchars(strip_tags($action));

    Then, when I am 'recalling' the data back via POST, I would use:

        $posting_id = mysqli_real_escape_string($dbc, trim($_POST['posting_id']));
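    A sketch of the usual hardening (not from the original post; $dbc, $user_id and the postings table are assumptions): whitelist the action against an array, and keep ids out of the SQL text entirely with a cast plus a prepared statement:

        $allowed = array('removeposting', 'removepicture', 'removefriend', 'removecomment');
        $action  = isset($_GET['action']) ? trim($_GET['action']) : '';
        if (!in_array($action, $allowed, true)) {
            header('Location: index.php'); // no echo before header(), or the redirect fails
            exit();
        }

        // ids are integers, so the cast alone defuses any SQL or script payload
        $posting_id = (int) $_POST['posting_id'];
        $stmt = mysqli_prepare($dbc, 'DELETE FROM postings WHERE id = ? AND user_id = ?');
        mysqli_stmt_bind_param($stmt, 'ii', $posting_id, $user_id);
        mysqli_stmt_execute($stmt);

    The strict in_array check also sidesteps a subtle detail in the snippet above: output is echoed before header("Location: ..."), which prevents the redirect from being sent.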

    Read the article

  • counter_cache not updating on the model after save

    - by sehnsucht
    I am using a counter_cache to let MySQL do some of the bookkeeping for me:

        class Container < ActiveRecord::Base
          has_many :items
        end

        class Item < ActiveRecord::Base
          belongs_to :container, :counter_cache => true
        end

    Now, if I do this:

        container = Container.find(57)
        item = Item.new
        item.container = container
        item.save

    in the SQL log there will be an INSERT followed by something like:

        UPDATE `containers` SET `items_count` = COALESCE(`items_count`, 0) + 1 WHERE `containers`.`id` = 57

    which is what I expected it to do. However, container[:items_count] will be stale! ...unless I container.reload to pick up the updated value. To my mind that defeats part of the purpose of using :counter_cache over a custom-built one, especially since I may not actually want a reload before I access the items_count attribute. (My models are pretty code-heavy because of the nature of the domain logic, so I sometimes have to save and create multiple things in one controller call.) I understand I can tinker with callbacks myself, but this seems a fairly basic expectation of the feature. Again, if I have to write additional code to make it fully work, it might as well be easier to implement a custom counter. What am I doing/assuming wrong?
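    A short sketch of the usual workarounds (not from the original post): the increment is issued purely in SQL, so the in-memory object has to be refreshed explicitly when the fresh value matters:

        container = Container.find(57)
        container.items.create!          # counter_cache fires the UPDATE in SQL

        container.items_count            # stale: still the value loaded by find
        container.reload.items_count     # fresh: re-reads the row

        # if the cached column ever drifts, Rails can recompute it from a count
        Container.reset_counters(container.id, :items)

    (reset_counters is available on newer Rails versions.) The column is deliberately write-only from the loaded model's point of view; pushing the arithmetic into the UPDATE itself is what keeps the counter race-free under concurrent inserts.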

    Read the article

  • C# How can I get each column type and length, and then use the length to PadRight to get the spaces at the end of each field?

    - by svon
    I have a console application that extracts data from a SQL table to a flat file. How can I get each column's type and length, and then use the length of each column in PadRight(length) to get the spaces at the end of each field? Here is what I have right now, which does not include this functionality. Thanks.

        var destination = args[0];
        var command = string.Format("Select * from {0}", Validator.Check(args[1]));
        var connectionstring = string.Format("Data Source={0}; Initial Catalog=dbname;Integrated Security=SSPI;", args[2]);
        var helper = new SqlHelper(command, CommandType.Text, connectionstring);

        using (StreamWriter writer = new StreamWriter(destination))
        using (IDataReader reader = helper.ExecuteReader())
        {
            while (reader.Read())
            {
                Object[] values = new Object[reader.FieldCount];
                int fieldCount = reader.GetValues(values);
                for (int i = 0; i < fieldCount; i++)
                    writer.Write(values[i].ToString().PadRight(513));
                writer.WriteLine();
            }
            writer.Close();
        }
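    A sketch of one way to do it (not from the original post; requires using System.Data and System.Linq): IDataReader.GetSchemaTable() exposes a ColumnSize value per column, which can replace the hard-coded 513:

        using (StreamWriter writer = new StreamWriter(destination))
        using (IDataReader reader = helper.ExecuteReader())
        {
            DataTable schema = reader.GetSchemaTable();
            int[] widths = schema.Rows.Cast<DataRow>()
                                      .Select(r => (int)r["ColumnSize"] + 1)
                                      .ToArray(); // +1 leaves one space between fields

            while (reader.Read())
            {
                for (int i = 0; i < reader.FieldCount; i++)
                    writer.Write(Convert.ToString(reader.GetValue(i)).PadRight(widths[i]));
                writer.WriteLine();
            }
        }

    One caveat: variable-length max types report a huge ColumnSize, so a real version would clamp widths[i] to some sane maximum. The schema table also carries a DataType column if the column's CLR type is needed.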

    Read the article

  • how can I make a "downstream only" copy of a file in TFS

    - by jcollum
    I've got a SQL script that needs to exist in two places in source control. I want to have only one real copy of this file and keep a virtual copy of it in the other solution. One is needed for a unit test and the other for a development tool. The files should, by definition, always be the same; if they have differences, then there's a problem with our process. In SourceGear I could make a virtual copy of a specific version of a file and keep it somewhere else in the source tree. That doesn't seem to be possible in TFS. Is it possible in SVN? So what are my options here? Branching/merging, which is what the TFS team says I should be doing here, means just another step that I have to remember to do. Plus it isn't automatic, and I would prefer that this be automated. Is there some way to run an exe on check-in of a specific file? I'm thinking that if I could do that, then I could do a checkout-edit-checkin of the downstream copy of the file.

    Read the article

  • Questions about shifting from mysql to PDO

    - by Scarface
    Hey guys, I have recently decided to switch all my current plain MySQL queries performed with PHP's mysql_query over to PDO-style queries, to improve performance, portability and security. I just have some quick questions for any experts in this database interaction tool:

    1. Will it prevent injection if all statements are prepared? (I noticed php.net wrote 'however, if other portions of the query are being built up with unescaped input, SQL injection is still possible', and I was not exactly sure what this meant.) Does this just mean that if all variables are run through a prepare function it is safe, and if some are directly inserted into the query string then it is not?

    2. Currently I have a connection at the top of my page and queries performed during the rest of the page. I took a look at PDO in more detail and noticed that there is a try/catch procedure for every query, involving a connection and the closing of that connection. Is there a straightforward way of connecting and then reusing that connection, without having to put everything in a try block or constantly repeat the connect-query-close procedure?

    3. Can anyone briefly explain, in layman's terms, what purpose set_exception_handler serves?

    I appreciate any advice from any more experienced individuals.
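    A minimal sketch covering both points (not from the original question; host, credentials and table names are placeholders): one PDO object created once and reused for every query on the page, with values bound so they never become part of the SQL text:

        // one connection per request, reused by every later query
        $pdo = new PDO(
            'mysql:host=localhost;dbname=mydb;charset=utf8',
            'user', 'password',
            array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
        );

        // safe: the value travels separately from the SQL text
        $stmt = $pdo->prepare('SELECT * FROM posts WHERE author_id = ?');
        $stmt->execute(array($author_id));
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

        // still injectable: $table is concatenated into the SQL itself,
        // which is exactly the case the php.net warning describes
        $stmt = $pdo->prepare("SELECT * FROM $table WHERE author_id = ?");

    PHP closes the connection at the end of the request, so there is no need to wrap every single query in its own try/catch; one handler (or set_exception_handler as a last-resort logger for uncaught exceptions) is enough.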

    Read the article

  • What is the best way to structure those jquery callbacks?

    - by user518138
    I am new to jQuery and recently have been amazed at how powerful those callbacks are. But here I have got some logic and I am not sure what the best way is. Basically, I have got a bunch of tables in a Web SQL database in Chrome. I will need to go through, say, table A, table B and table C, and submit each row of each table to the server. Each row represents a bunch of complicated logic and data URLs, and the tables have to be submitted in the order A - B - C. The regular Java way would be:

        TableA.SubmitToServer() {
            query table A;
            foreach (row in tableA.rows) {
                int nRetID = submitToServer(row);
                // do other updates...
            }
        }

    Similar for TableB and TableC, then just call:

        TableA.SubmitToServer();
        TableB.SubmitToServer();
        TableC.SubmitToServer();
        // that is very clear and easy

    But in jQuery, it probably will be:

        db.transaction(function (tx) {
            var strQuery = "select * from TableA";
            tx.executeSql(strQuery, [], function (tx, result) {
                for (i = 0; i < result.rows.length; i++) {
                    submitTableARowToServer(tx, result.rows.getItem(i), function (tx, result) {
                        // do some other related updates based on this row from tableA
                        // also need to upload some related files to server...
                    });
                }
            }, function errorcallback ...);
        });

    As you can see, there are already enough nested callbacks. Now, where should I put the processing for TableB and TableC? They all need similar logic, and they can only be called after everything is done for TableA. So, what is the best way to do this in jQuery? Thanks.
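    One way to flatten the nesting (a sketch, not from the original question; submitRowToServer stands in for the real per-row upload): drive the tables from an array and recurse, so table B only starts once table A's rows have been handed off:

        var tables = ['TableA', 'TableB', 'TableC'];

        function submitTable(index) {
            if (index >= tables.length) { return; }   // all tables done
            db.transaction(function (tx) {
                tx.executeSql('SELECT * FROM ' + tables[index], [], function (tx, result) {
                    for (var i = 0; i < result.rows.length; i++) {
                        submitRowToServer(tables[index], result.rows.item(i));
                    }
                    submitTable(index + 1);  // move on to the next table
                });
            });
        }

        submitTable(0);

    If the per-row upload is itself asynchronous and the order must hold all the way to the server's responses, the same recursion pattern applies one level down (or the ajax calls can be chained with jQuery's $.Deferred/$.when).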

    Read the article

  • EventHandlers saved to databases.

    - by Stacey
    In a database application (using SQL Server right now, in C#, with Entity Framework 4.0) I have a situation where I need to trigger events when some values change. For instance, assume a class Trackable:

        class Trackable
        {
            string Name { get; set; }
            int Positive { get; set; }
            int Negative { get; set; }
            int Total { get; set; }
            // event OnChanged
        }

    Trackable is represented in the database as follows:

        table Trackables
        id       | guid
        name     | varchar(32)
        positive | int
        negative | int

    Total is, of course, calculated at runtime. When a trackable changes, I want to inspect its previous value, see what it is changing to, and be able to react accordingly. However, different trackables need to trigger different events (to avoid a huge, massive cascading switch/if block). If this were just C# code it would be easy, but they have to be saved to the database. I can't divide up each different trackable into a different table/class; that would be silly, since they are all identical and only the event raised differs based on how they are made. So I guess my question is: is there any way to store an event handler in a database, such that...

        Trackable t1 = new Trackable()
        {
            Name = "Trackable1"
            OnChange += TrackableChangedEventHandler(OnTrackable1Change)
        }

        Trackable t2 = new Trackable()
        {
            Name = "Trackable2",
            OnChange += TrackableChangedEventHandler(OnTrackable2Change)
        }
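    A delegate cannot be stored in a table, but a key naming one can. A sketch of that pattern (not from the original question; all names below are illustrative assumptions): persist a handler_key varchar column alongside the other fields, and resolve it through a dictionary after the entity is materialized:

        using System;
        using System.Collections.Generic;

        class Trackable
        {
            public string Name { get; set; }
            public string HandlerKey { get; set; }   // mapped to a varchar column
            public event EventHandler Changed;
            public void RaiseChanged() { if (Changed != null) Changed(this, EventArgs.Empty); }
        }

        static class TrackableEvents
        {
            static readonly Dictionary<string, EventHandler> Handlers =
                new Dictionary<string, EventHandler>
                {
                    { "trackable1", OnTrackable1Change },
                    { "trackable2", OnTrackable2Change },
                };

            // call once per entity after loading it from the database
            public static void Wire(Trackable t)
            {
                EventHandler handler;
                if (Handlers.TryGetValue(t.HandlerKey, out handler))
                    t.Changed += handler;
            }

            static void OnTrackable1Change(object sender, EventArgs e) { /* ... */ }
            static void OnTrackable2Change(object sender, EventArgs e) { /* ... */ }
        }

    All trackables keep sharing one table; only the key differs per row, and the switch/if cascade collapses into the dictionary lookup.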

    Read the article

  • Submitting form with Ajax

    - by Sidetracking
    So I'm completely lost on how to submit a form with Ajax. I'm fairly new to JavaScript, and hopefully I'm not in over my head. When I click submit on my form, nothing happens on my page, and nothing arrives in my SQL database where the info should be stored (I double-checked the processing form too). Here's the code, if anyone's willing to help.

    JavaScript:

        $(document).ready(function() {
            $('#form').submit(function() {
                $.ajax({
                    url: "../process.php",
                    type: "post",
                    data: $(this).serialize()
                });
            });
        });

    HTML:

        <form name="contact" method="post" action="" id="form">
            <span id="input">
                <input type="text" name="first" maxlength="50" size="30" title="First Name" class="textbox">
                <input type="text" name="last" maxlength="80" size="30" title="Last Name" class="textbox">
                <input type="text" name="email" maxlength="80" size="30" title="Email" class="textbox">
                <textarea name="request" maxlength="1000" cols="25" rows="6" title="Request"></textarea>
            </span>
            <input type="submit" value="Submit" class="submit">
        </form>
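    The most common culprit with this shape of code is that the default submit is never cancelled, so the page navigates away before the Ajax request completes. A sketch of the fix (the success handler's behavior is an assumption, not from the original post):

        $(document).ready(function () {
            $('#form').submit(function (e) {
                e.preventDefault(); // stop the normal submit from reloading the page
                $.ajax({
                    url: '../process.php',
                    type: 'post',
                    data: $(this).serialize(),
                    success: function (response) {
                        $('#input').html('Thanks!'); // placeholder feedback
                    }
                });
            });
        });

    Returning false from the submit handler achieves the same cancellation in older jQuery styles.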

    Read the article

  • Not getting concept of null

    - by appu
    Hi guys, I'm beginning with MySQL and am not able to grasp the concept of NULL. Check the screenshot (declare_not_null link below). In it, I specifically declared the 'name' field to be NOT NULL, yet when I run the 'desc test' command, the table description shows the default value for the name field to be NULL. Why is that so? From what I have read about NULL, it connotes missing information or information that is not applicable. So when I declare a field to be NOT NULL, it implies (as per my understanding) that the user must enter a value for the name field, else the DB engine should generate an error, i.e. the record will not be entered into the DB. However, when I run 'insert into test value();', the DB engine enters the record into the table anyway. Check the screenshot (empty_value link below).

    FLICKR LINKS
    declare_not_null: http://www.flickr.com/photos/55097319@N03/5302758813/
    empty_values: check the second screenshot on flickr

    Q.2: What would be the SQL statement to drop a primary key from a table's field? If I use 'ALTER TABLE test drop key id;' it gives the following: ERROR: Incorrect table definition; there can be only one auto column and it must be defined as a key. Thanks for your help.
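    For Q.2, a sketch of the usual two-step fix (not from the original post): MySQL refuses to drop the key while the column still has the AUTO_INCREMENT attribute, so remove that attribute first:

        ALTER TABLE test MODIFY id INT NOT NULL;   -- strip AUTO_INCREMENT
        ALTER TABLE test DROP PRIMARY KEY;         -- now the key can be dropped

    (The column type INT is an assumption; it must match the existing definition of id.)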

    Read the article

  • How to improve my software project's speed?

    - by Blitzkr1eg
    I'm doing a school software project with my classmates, in Java. We store the info in a remote DB. When we start the application we pull all the information from the database and transform it into objects to use in our application (using Java SQL statements). In the application we edit some of these objects, and then when we exit the application we save or update the information in the database using Hibernate. As you can see, we don't use Hibernate for pulling in information; we use it just for saving and updating. We have two very similar problems: the loading of objects (when we start the app) and the saving of objects with Hibernate (when closing the app) both take too much time. And our project is not a huge enterprise application; it's a quite small app. We just manage some students, teachers, homework and tests, so our DB is also very, very small. How could we increase performance?

    Later edit: if we use a local database it runs very quickly; it is only slow on remote databases.
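    A sketch of the first knobs worth trying (assumptions, not from the original post): with a remote database the dominant cost is usually one network round-trip per statement, and Hibernate can batch those:

        # hibernate.properties (values are illustrative)
        hibernate.jdbc.batch_size=50
        hibernate.order_inserts=true
        hibernate.order_updates=true

    With batching on, 500 updates travel as a handful of round-trips instead of 500. The same reasoning applies to the hand-written load path: one SELECT that fetches all rows at once beats per-object queries issued in a loop.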

    Read the article

  • CakePHP pagination with HABTM models

    - by nickf
    I'm having some problems with creating pagination with a HABTM relationship. First, the tables and relationships:

        requests (id, to_location_id, from_location_id)
        locations (id, name)
        items_locations (id, item_id, location_id)
        items (id, name)

    So, a Request has a Location the request is coming from and a Location the Request is going to. For this question, I'm only concerned about the "to" location:

        Request --belongsTo--> Location (as "ToLocation") --hasAndBelongsToMany--> Item

    In my RequestsController, I want to paginate all the Items in a Request's ToLocation:

        // RequestsController
        var $paginate = array(
            'Item' => array(
                'limit' => 5,
                'contain' => array("Location")
            )
        );

        // RequestsController::add()
        $locationId = 21;
        $items = $this->paginate('Item', array("Location.id" => $locationId));

    And this is failing, because it is generating this SQL:

        SELECT COUNT(*) AS count FROM items Item WHERE Location.id = 21

    I can't figure out how to make it actually use the "contain" argument of $paginate... Any ideas?
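    One workaround often suggested for this (a sketch, not from the original question; alias and column names are assumptions): skip contain for the pagination itself and declare the join explicitly, so the COUNT(*) query sees the same tables as the data query:

        // RequestsController
        var $paginate = array(
            'Item' => array(
                'limit' => 5,
                'joins' => array(array(
                    'table'      => 'items_locations',
                    'alias'      => 'ItemsLocation',
                    'type'       => 'INNER',
                    'conditions' => array('ItemsLocation.item_id = Item.id'),
                )),
            )
        );

        // RequestsController::add()
        $items = $this->paginate('Item', array('ItemsLocation.location_id' => $locationId));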

    Read the article

  • Is there a way to sync (two way) tables between a mysql server and a local MS Access?

    - by Kailen
    Help me figure out a solution to a (not so unique) problem. My research group has GPS devices attached to migratory animals. Every once in a while, a research tech will be within range of an animal and will get the chance to download all the logged points. Each individual device spits out a single .dbf, and new locations are just appended to the end (so the file is cumulative). These data need to be shared among the research group. Everyone else (besides me) wants to use Access, so they can make small edits, and they prefer that interface; they do not like using MySQL. The solution I came up with is: (a) the person who downloads the file goes to a web page, enters the animal ID into a form, chooses the .dbf file and uploads it to a MySQL database on the server (I still have to write PHP code to read the dbf and write SQL insert statements from it); (b) everyone syncs from their local Access database to the server (this is natively possible from Access but very clunky). Is there a tool (preferably open source) that can compare an Access table to a MySQL table and sync the two (both ways)? Alternatively, does anyone have a more elegant solution? The ultimate goal is to allow everyone to have the most current data on their computers using their preferred database app.

    Read the article

  • Ruby on Rails: temporarily update an attribute into cache without saving it?

    - by randombits
    I have a bit of code that depicts the hypothetical setup below: a class Foo which contains many Bars, and a Bar belongs to one and only one Foo. At some point, Foo runs a finite loop that lasts two or more iterations. In that loop, something like the following happens:

        bar = Bar.find_where_in_use_is_zero
        bar.in_use = 1

    Basically, what find_where_in_use_is_zero does in SQL is something like this:

        SELECT * FROM bars WHERE in_use = 0

    Now, the problem I'm facing is that I cannot run bar.save after bar.in_use = 1 is invoked. The reason is clear: I'm still looping and the new Foo hasn't been created, so we don't have a foo_id to put into bars.foo_id. Even if I allow foo_id to be NULL, we have the problem that one of the bars can fail validation after an earlier one was already saved to the database. In my application, that doesn't work: the entire request is atomic, so either everything succeeds or it fails together. What happens next is that, in a later iteration of the loop, I have the potential to select the exact same bar that I did in a previous iteration, since the in_use flag will not be set to 1 until @foo.save is called. Is there any way to work around this condition and temporarily treat the in_use attribute as 1 for subsequent iterations of the loop, so that I always retrieve an available bar instance?
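    One in-memory workaround (a sketch, not from the original post; bars_needed is an assumed count): remember the ids already picked and exclude them from the next lookup, leaving all writes to the final save:

        picked = []
        bars_needed.times do
          conditions = if picked.empty?
                         ['in_use = 0']
                       else
                         ['in_use = 0 AND id NOT IN (?)', picked]
                       end
          bar = Bar.first(:conditions => conditions)
          bar.in_use = 1          # in memory only for now
          picked << bar.id
          @foo.bars << bar
        end
        @foo.save  # persists foo, the foo_ids and the in_use flags together

    Wrapping the save in a transaction keeps the request atomic; nothing touches the database until every bar has passed validation.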

    Read the article

  • cakephp find('list') - problems using it

    - by MOFlint
    I'm trying to get an array to populate a counties select. If I use find('all') I can get the data, but the array needs flattening to be used in the view with $form->input($counties). If I use find('list') I can't seem to get the right array, which should be a simple array of county names. What I have tried is this:

        $ops = array(
            'conditions' => array(
                'display' => '!=0',
                'TO_DAYS(event_finish) >= TO_DAYS(NOW())'
            ),
            'fields' => 'DISTINCT venue_county',
            'order' => 'venue_county DESC'
        );
        $this->set('counties', $this->Event->find('list', $ops));

    but the SQL this generates is:

        SELECT Event.id, DISTINCT Event.venue_county FROM events AS Event
        WHERE display = 1 AND TO_DAYS(event_finish) >= TO_DAYS(NOW())
        ORDER BY venue_county DESC

    which generates an error, because Cake first inserts the Event.id field into the query, which is not wanted. In my database I have a single table for Events which includes the venue address, and I don't really want to create another table for addresses. What options should I be using for the find('list') call?
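    One option that has worked for similar setups (a sketch, not from the original post): give find('list') an explicit key/value pair and swap DISTINCT for GROUP BY, so Cake has no reason to prepend Event.id:

        $ops = array(
            'conditions' => array(
                'display !=' => 0,
                'TO_DAYS(event_finish) >= TO_DAYS(NOW())'
            ),
            // key and value are both venue_county, giving a flat list of names
            'fields' => array('venue_county', 'venue_county'),
            'group'  => 'venue_county',
            'order'  => 'venue_county DESC'
        );
        $this->set('counties', $this->Event->find('list', $ops));

    GROUP BY venue_county returns the same distinct set as DISTINCT, but through an option Cake merges cleanly into find('list').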

    Read the article

  • Parallel features in .Net 4.0

    - by Jonathan.Peppers
    I have been going over the practicality of some of the new parallel features in .NET 4.0. Say I have code like so:

        foreach (var item in myEnumerable)
            myDatabase.Insert(item.ConvertToDatabase());

    Imagine myDatabase.Insert is performing some work to insert into a SQL database. Theoretically you could write:

        Parallel.ForEach(myEnumerable, item => myDatabase.Insert(item.ConvertToDatabase()));

    and automatically you get code that takes advantage of multiple cores. But what if myEnumerable can only be interacted with by a single thread? Will the Parallel class enumerate on a single thread and only dispatch the results to worker threads in the loop? What if myDatabase can only be interacted with by a single thread? It would certainly not be better to make a database connection per iteration of the loop. Finally, what if my "var item" happens to be a UserControl or something that must be interacted with on the UI thread? What design pattern should I follow to solve these problems? It is looking to me like switching over to Parallel/PLINQ/etc. is not exactly easy when you are dealing with real-world applications.
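    On the connection question, a sketch of the relevant overload (not from the original post; DatabaseConnection is an assumed helper type): Parallel.ForEach's localInit/localFinally variant creates one state object per worker thread, not per iteration and not one shared instance:

        Parallel.ForEach(
            myEnumerable,                    // pulled by the partitioner, one item at a time
            () => new DatabaseConnection(),  // localInit: runs once per worker thread
            (item, loopState, db) =>
            {
                db.Insert(item.ConvertToDatabase());
                return db;                   // carried into this thread's next iteration
            },
            db => db.Dispose());             // localFinally: per-thread cleanup

    The source enumerator itself is read under a lock by the default partitioner, so a single-threaded enumerable is enumerated safely; it is the body delegates that run concurrently.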

    Read the article

  • Rebuilding indexes does not change the fragmentation % for nonclustered indexes.

    - by Noddy
    For starters, I am no DBA, and I am working on rebuilding indexes. I made use of the TSQL script from MSDN that alters indexes based on the fragmentation percentage returned by dm_db_index_physical_stats: if the fragmentation is more than 30 percent it does a REBUILD, otherwise a REORGANIZE. What I found was that in the first iteration there were 87 records that needed defragmenting. I ran the script and all 87 indexes (clustered and nonclustered) were rebuilt or reorganized. When I pulled the stats from dm_db_index_physical_stats again, there were still 27 records needing defragmentation, and all of these were NONCLUSTERED indexes; all the clustered indexes were fixed. No matter how many times I run the script to defragment these records, I am left with the same indexes to defragment, most of them with the same fragmentation percentage. Nothing seems to change after this. Note: I did not perform any inserts/updates/deletes on the tables during these iterations; still, the rebuild/reorganize did not result in any change.

    More information: using SQL 2008; the script is as available on MSDN: http://msdn.microsoft.com/en-us/library/ms188917.aspx

    Could you please explain why these 27 nonclustered indexes are not being changed/modified? Any help on this would be highly appreciated.
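    A sketch of a sanity filter (not from the original post): fragmentation percentages are only meaningful for indexes of a reasonable size, and a common rule of thumb is to ignore anything under about 1000 pages, which very likely covers those stubborn 27 nonclustered indexes:

        SELECT OBJECT_NAME(ips.[object_id])      AS table_name,
               i.name                            AS index_name,
               ips.page_count,
               ips.avg_fragmentation_in_percent
        FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN   sys.indexes AS i
               ON i.[object_id] = ips.[object_id]
              AND i.index_id   = ips.index_id
        WHERE  ips.page_count < 1000   -- small indexes rarely defragment further
        ORDER  BY ips.avg_fragmentation_in_percent DESC;

    Tiny indexes sit on pages allocated from mixed extents, so a rebuild physically cannot bring their reported fragmentation down; for them the number is noise rather than a problem.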

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Year! 2010 has been a busy blogging year for me (this is the 100th blog post I've done in 2010). Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year. Below is a quick listing of some of my favorite posts, organized by topic area.

    VS 2010 and .NET 4: Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April:
    - Visual Studio 2010 and .NET 4 Released
    - Clean Web.Config Files
    - Starter Project Templates
    - Multi-targeting
    - Multiple Monitor Support
    - New Code Focused Web Profile Option
    - HTML / ASP.NET / JavaScript Code Snippets
    - Auto-Start ASP.NET Applications
    - URL Routing with ASP.NET 4 Web Forms
    - Searching and Navigating Code in VS 2010
    - VS 2010 Code Intellisense Improvements
    - WPF 4 Add Reference Dialog Improvements
    - SEO Improvements with ASP.NET 4
    - Output Cache Extensibility with ASP.NET 4
    - Built-in Charting Controls for ASP.NET and Windows Forms
    - Cleaner HTML Markup with ASP.NET 4 - Client IDs
    - Optional Parameters and Named Arguments in C# 4 - and a cool scenario with ASP.NET MVC 2
    - Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010
    - New <%: %> Syntax for HTML Encoding Output using ASP.NET 4
    - JavaScript Intellisense Improvements with VS 2010
    - VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export)
    - Box Selection and Multi-line Editing Support with VS 2010
    - VS 2010 Extension Manager (and the cool new PowerCommands Extension)
    - Pinning Projects and Solutions
    - VS 2010 Web Deployment
    - Debugging Tips/Tricks with Visual Studio
    - Search and Navigation Tips/Tricks with Visual Studio

    Visual Studio: Below are some additional Visual Studio posts I've done (not in the first series above) that I thought were nice:
    - Download and Share Visual Studio Color Schemes
    - Visual Studio 2010 Keyboard Shortcuts
    - VS 2010 Productivity Power Tools
    - Fun Visual Studio 2010 Wallpapers

    Silverlight: We shipped Silverlight 4 in April, and announced Silverlight 5 at the beginning of December:
    - Silverlight 4 Released
    - Silverlight 4 Tools for VS 2010 and WCF RIA Services Released
    - Silverlight 4 Training Kit
    - Silverlight PivotViewer Now Available
    - Silverlight Questions
    - Announcing Silverlight 5

    Silverlight for Windows Phone 7: We shipped Windows Phone 7 this fall, and shipped free Visual Studio development tools with great Silverlight and XNA support in September:
    - Windows Phone 7 Developer Tools Released
    - Building a Windows Phone 7 Twitter Application using Silverlight

    ASP.NET MVC: We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer. ASP.NET MVC 3 will RTM in less than 2 weeks from today:
    - ASP.NET MVC 2: Strongly Typed Html Helpers
    - ASP.NET MVC 2: Model Validation
    - Introducing ASP.NET MVC 3 (Preview 1)
    - Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack)
    - Announcing ASP.NET MVC 3 Release Candidate 1
    - Announcing ASP.NET MVC 3 Release Candidate 2
    - Introducing Razor – A New View Engine for ASP.NET
    - ASP.NET MVC 3: Layouts with Razor
    - ASP.NET MVC 3: New @model keyword in Razor
    - ASP.NET MVC 3: Server-Side Comments with Razor
    - ASP.NET MVC 3: Razor's @: and <text> syntax
    - ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor
    - ASP.NET MVC 3: Layouts and Sections with Razor

    IIS and Web Server Stack: The IIS and web stack teams have made a bunch of great improvements to the core web server this year:
    - Fix Common SEO Problems using the URL Rewrite Extension
    - Introducing the Microsoft Web Farm Framework
    - Automating Deployment with Microsoft Web Deploy
    - Introducing IIS Express
    - SQL CE 4 (New Embedded Database Support with ASP.NET)
    - Introducing Web Matrix

    EF Code First: EF Code First is a really nice new data option that enables a very clean code-oriented data workflow:
    - Announcing Entity Framework Code-First CTP5 Release
    - Class-Level Model Validation with EF Code First and ASP.NET MVC 3
    - Code-First Development with Entity Framework 4
    - EF 4 Code First: Custom Database Schema Mapping
    - Using EF Code First with an Existing Database

    jQuery and AJAX Contributions: My team began making some significant source code contributions to the jQuery project this year:
    - jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins
    - jQuery Templates and Data Linking (and Microsoft contributing to jQuery)
    - jQuery Globalization Plugin from Microsoft

    Patches and Hot Fixes: Some useful fixes you can download prior to VS 2010 SP1:
    - Patch for Cut/Copy "Insufficient Memory" issue with VS 2010
    - Patch for VS 2010 Find and Replace Dialog Growing
    - Patch for VS 2010 Scrolling Context Menu

    Videos of My Talks: Some recordings of technical talks I've done this year:
    - ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe
    - VS 2010 and ASP.NET 4 Web Forms Talk in Arizona

    Other:
    - About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular)
    - ASP.NET Security Fix Now on Windows Update
    - Upcoming Web Camps

    I'd like to say a big thank you to everyone who follows my blog; I really appreciate you reading it (the comments you post help encourage me to write it). See you in the New Year!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Installation

    - by StuartBrierley
    As previously detailed, I have completed a single-server installation of BizTalk Server 2009 Standard on my development laptop: a MacBook Pro Core2Duo running at 2.16GHz with 2GB of RAM. Following this, I also posted on my use of the BizTalk Server Best Practices Analyser and on how to configure the BizTalk SQL Server jobs. All of which means that I should have some confidence that I have a decent working BizTalk Server 2009 environment.

    Next, I thought it would be a good idea to get some idea of how this setup performs, by carrying out baseline tests that can then be replicated on the test and live servers. The aim is to allow confident predictions of how any solutions developed on a single-"server" installation may be expected to perform when deployed to multi-server BizTalk Server 2009 Standard installations. The BizTalk Benchmark Wizard would seem to be the perfect tool for the job.

    The BizTalk Benchmark Wizard is a utility that can be used to validate a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. It should be used after BizTalk Server has been installed and before any solutions are deployed to the environment; this ensures that you get consistent and clean results from it. The wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios, performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment:

    "The executed scenarios may or may not be relative to any realistic scenario, and is only intended for testing. The BizTalk Benchmark Wizard has been developed in relation to the BizTalk Server 2009 Scale Out Testing Study. More information about the study can be found here: http://msdn.microsoft.com/en-us/library/ee377068(BTS.10).aspx"

    After downloading and installing the wizard, you will need to set up the hosts, instances and adapter handlers. This is done by running a script with "cscript", as detailed below. Open a command prompt window and navigate to the script folder; assuming the default installation location, this would be C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk. In this folder you should find an InstallHosts.vbs file, which can be executed with the following parameters:

    - NTGroupName: the name of the Windows NT group.
    - UserName: the name of the user account running the service instances.
    - Password: the password of the user account running the service instances.
    - Receive Host: the name of the server where you want to run the receive host instance.
    - Send Host: the name of the server where you want to run the send host instance.
    - Processing Host: the name of the server where you want to run the processing host instance.

    By default the script is set up for 64-bit hosts, so if you are running in a 32-bit environment make sure that you change the following line in the script before continuing, from:

        objHS.IsHost32BitOnly = False

    to:

        objHS.IsHost32BitOnly = True

    If you have a single-box installation, your script command might look like this:

        cscript InstallHosts.vbs "BizTalk Application Users" "\MyUser" "MyPassword" "BtsServer1" "BtsServer1" "BtsServer1"

    If you have a multi-server installation, your script command might look like this:

        cscript InstallHosts.vbs "MyDomain\BizTalk Application Users" "MyDomain\MyUser" "MyPassword" "BtsServer1" "BtsServer2" "BtsServer2"

    Running this script will create:

    - three hosts (BBW_RxHost, BBW_TxHost and BBW_PxHost)
    - three host instances
    - one send and one receive adapter handler for the WCF NetTcp adapter

    You will then need to import the BizTalk MSI via the BizTalk Administration Console: point to the "Applications" node and import the BizTalk Benchmark Wizard.msi found in the same folder as the script above. This will create a "BizTalk Benchmark Wizard" application along with all the ports and orchestrations needed. To finish the installation, run the BizTalk Benchmark Wizard.msi on all BizTalk servers to add the assemblies to the Global Assembly Cache (GAC). Next I will look at running the BizTalk Benchmark Wizard.

    Read the article

  • MongoDB usage best practices

    - by andresv
    The project I'm working on uses MongoDB for some things, so I'm creating some documents to help developers speed up the learning curve, avoid mistakes, and write clean and reliable code. This is my first version of it, so I'm pretty sure I will be adding more stuff to it. Stay tuned!

    C# official driver notes: The 10gen official MongoDB driver should always be referenced in projects by using NuGet. Do not manually download and reference assemblies in any project. C# driver quickstart guide: http://www.mongodb.org/display/DOCS/CSharp+Driver+Quickstart

    Reference links:
    - C# Language Center: http://www.mongodb.org/display/DOCS/CSharp+Language+Center
    - MongoDB Server Documentation: http://www.mongodb.org/display/DOCS/Home
    - MongoDB Server Downloads: http://www.mongodb.org/downloads
    - MongoDB client drivers download: http://www.mongodb.org/display/DOCS/Drivers
    - MongoDB Community content: http://www.mongodb.org/display/DOCS/CSharp+Community+Projects

    Tutorials:
    - Tutorial MongoDB con ASP.NET MVC - Ejemplo Práctico (Spanish): http://geeks.ms/blogs/gperez/archive/2011/12/02/tutorial-mongodb-con-asp-net-mvc-ejemplo-pr-225-ctico.aspx
    - MongoDB and C#: http://www.codeproject.com/Articles/87757/MongoDB-and-C
    - C# driver LINQ tutorial: http://www.mongodb.org/display/DOCS/CSharp+Driver+LINQ+Tutorial
    - C# driver reference: http://www.mongodb.org/display/DOCS/CSharp+Driver+Tutorial

    Safe mode connection: The C# driver supports two connection modes, safe and unsafe (safe mode only applies to methods that modify data in a database, like inserts, deletes and updates). While the current driver defaults to unsafe mode (safeMode == false), it is recommended to always enable safe mode and force unsafe mode only for specific things we know aren't critical. When safe mode is enabled, the driver internally calls the MongoDB "getLastError" function to ensure the last operation has completed before returning control to the caller. For more information on using safe mode and its implications for performance and data reliability, see: http://www.mongodb.org/display/DOCS/getLastError+Command

    If safe mode is not enabled, all data modification calls to the database are executed asynchronously (fire and forget), without waiting for the result of the operation. This mode could be useful for creating or updating non-critical data, like performance counters or usage logging. It is important to know that not using safe mode implies that data loss can occur without any notification to the caller. As with any wait operation, enabling safe mode also implies dealing with timeouts. For more information about C# driver safe mode configuration, see: http://www.mongodb.org/display/DOCS/CSharp+getLastError+and+SafeMode

    The safe mode configuration can be specified at different levels:
    - Connection string: mongodb://hostname/?safe=true
    - Database: when obtaining a database instance using the server.GetDatabase(name, safeMode) method
    - Collection: when obtaining a collection instance using the database.GetCollection(name, safeMode) method
    - Operation: for example, when executing the collection.Insert(document, safeMode) method

    A useful SafeMode article: http://stackoverflow.com/questions/4604868/mongodb-c-sharp-safemode-official-driver

    Exception handling: When safe mode is used, the driver ensures that an exception will be thrown if something goes wrong (as said above, when not using safe mode no exception will be thrown, no matter what the outcome of the operation is). As explained here https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/mS6jIq5FUiM there is no need to check any returned value from a driver method when inserting data. With updates, the situation is similar to any other relational database: if an update command doesn't affect any records, the call will succeed anyway (no exception thrown) and you have to check manually for something like "records affected". For MongoDB, an update operation returns an instance of the SafeModeResult class, and you can verify its DocumentsAffected property to ensure the intended document was indeed updated. Note: please remember that an update method might return a null instance instead of a SafeModeResult instance when safe mode is not enabled.

    Useful community articles:
    - Comments about how MongoDB works and how that might affect your application: http://ethangunderson.com/blog/two-reasons-to-not-use-mongodb/
    - FourSquare using MongoDB had serious scalability problems: http://mashable.com/2010/10/07/mongodb-foursquare/
    - Is MongoDB a replacement for Memcached? http://www.quora.com/Is-MongoDB-a-good-replacement-for-Memcached/answer/Rick-Branson
    - MongoDB introduction, shell, when not to use, maintenance, upgrade, backups, memory, sharding, etc.: http://www.markus-gattol.name/ws/mongodb.html
    - MongoDB collection-level locking support: https://jira.mongodb.org/browse/SERVER-1240
    - MongoDB performance tips: http://www.quora.com/MongoDB/What-are-some-best-practices-for-optimal-performance-of-MongoDB-particularly-for-queries-that-involve-multiple-documents
    - Lessons learned migrating from SQL Server to MongoDB: http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read
    - MongoDB replication performance: http://benshepheard.blogspot.com.ar/2011/01/mongodb-replication-performance.html
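    A sketch of what those levels look like with the 1.x-era official C# driver (illustrative, matching the APIs named above; needs MongoDB.Bson, MongoDB.Driver and MongoDB.Driver.Builders):

        var server = MongoServer.Create("mongodb://localhost/?safe=true");    // connection-string level
        var db     = server.GetDatabase("app", SafeMode.True);                // database level
        var users  = db.GetCollection<BsonDocument>("users", SafeMode.True);  // collection level

        users.Insert(new BsonDocument { { "name", "ada" } }, SafeMode.True);  // operation level

        // updates return a SafeModeResult in safe mode (null in unsafe mode)
        var result = users.Update(Query.EQ("name", "ada"),
                                  Update.Set("active", true),
                                  SafeMode.True);
        if (result != null && result.DocumentsAffected == 0)
        {
            // nothing matched: the manual "records affected" check described above
        }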

    Read the article

  • OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App

    - by user809526
    Isn't it nice to discover OBIEE 11g through a nice "How To" catalog of features? To observe OBI and Essbase relationships at work? To discover TimesTen? The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices: enhanced visualizations such as geo-spatial maps and interactive dashboards, the Action Framework, BI Publisher, Scorecard and Strategy Management, mobile style sheets, semantic layer modeling, multi-source federation, integration with products such as Essbase, Oracle OLAP, ODM, TimesTen, ODI and more.

    The FSA is intended to be comprehensive; it is big (see CAVEAT below). The FSA is not an Oracle product; it is a good-will free deployment of OBIEE/Essbase designed to exemplify OBIEE features, infrastructure and security around the Fusion Middleware components. Its contents and code are distributed free for demonstration purposes only; it is neither maintained nor supported by Oracle as a licensed product. The OBIEE Full Sample App is independent of the default Sample App that comes with the OBIEE product.

    BENEFITS: The FSA helps as a demonstrator of OBIEE 11g best practices, a tutorial, a "test and scrap" environment, an SR bench (regression, conflicts), a tuning bench, a quick ready-made POC seed for projects, a security-options environment, and more. The FSA is organized around a catalog of functional features, and it has been deployed over 1000 times, so it should be stable.

    RELEASE: The Full Sample App (V107) is bound to OBIEE 11.1.1.5 and Essbase 11.1.2.1 (November 2011). The FSA release dates are independent of the product GA date (OBIEE). In early December 2011, a new functional patch (V110) was released. It is easily applied (in less than 15 minutes) on top of OBIEE SampleApp 11.1.1.5 (V107). The patch (V110) includes additional functional examples:
    1. Web Catalog Statistics Application: provides detailed insight into your web catalog content, dormant catalog objects, webcat impact analysis for metadata changes, and more.
    2. Data Inflation Scripts: a set of simple SQL procedures to quickly inflate SampleApp fact and dimension data to millions of records in a few minutes.
    3. Public Content Extensions Framework: a patching framework for public examples and contributions leveraging SampleApp.
    4. Additional report examples (including bridge reports and external chart integrations) and bug fixes.

    DISTRIBUTION as a VBox image (November 2011): The ready-made VBox image is designed to run on VirtualBox. It can be converted to VMware (see another blog post).
    1. http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html: VBox Image Deployment Guide; Sampleapp_v107_GA.ovf is the VBox image key file. This URL provides the user:password for the ftp URLs below.
    2. ftp://user:[email protected]/static/SampleAppV107/ : twelve "7-zip" files, Sampleapp_v107_GA_7_20.7z.001 through .012. We recommend the 7-zip file manager for unzipping (http://www.7-zip.org/). Select the "Unzip here" option; it will create the contents under a directory named "SampleApp_10722". On Windows, it is important to download and save the zip files under the root directory (e.g. C:\ or D:\) because of possible long pathnames.
    3. ftp://user:[email protected]/static/SampleAppV107/Unzipped_Version/ : four files, Sampleapp_v107_GA-disk[1234].vmdk.

    Important note: check the provided checksums (md5sum). Please do it!

    DISTRIBUTION as installation files for an existing OBI 11.1.1.5 (November 2011): http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html: install files and Deployment Guide, SampleApp_10722_1.zip (198 MB).

    CAVEAT: Many computers have RAM chip problems that often stay silent... until you manipulate big files. It is strongly advised that you run a memory check program, e.g. MEMTEST in the GRUB boot manager. Running md5sum repeatedly on the very same big file must give consistent (identical) results; otherwise a hardware memory problem is to be suspected. For VirtualBox, you should most likely enable VT-x (Vanderpool) hardware virtualization in the BIOS. A free disk space of 80 GB is required to perform the VBox image installation safely. A virtual machine with a minimum of 6 to 7 GB of memory fits the needs of combining OBIEE and Essbase execution.

    Read the article

  • gnome3 and unity bug in 11.10

    - by LinuxNoob
    Yes, I am new to Linux, but I am learning stuff daily. I finally migrated my work systems to Ubuntu 11.10 and they work pretty decently, given the fact that some of the applications I have to use to do my job are not supported by Linux... yet.

    Anyway, I was working with Unity, not even knowing there was a GNOME 3 environment until I ran across a post while searching for a fix to another, unrelated issue. Lo and behold, after much searching I was able to log in to GNOME 3 after installing the required packages. I don't know how many articles I had to read before someone explained how to log in to the different environments, by selecting the little wheel on the login screen to get the drop-down (I know... I'm such a noob, but I'm sure we all were at one time or another). Again, after using Unity I was hooked on Linux and decided that if I could get it to work with all the apps I need at work, and play a little WoW every now and again, I'd be good to go.

    But again, to the point. I logged into GNOME 3. No icons, but just a cool experience to see what the heck this thing does. I eventually figured out how to use it, and I love GNOME 3, but it is as buggy as the C++ programs I first learned to write in college. I don't know if it's because I installed the packages after I was up and running in Unity or what. The issue I have is that it locks the desktop up randomly... just locks it up, usually when I have a lot of windows open. It never fails to do it when I put the mouse up in the top-left corner to swap apps. Then I can't get to the windows in the back to shut them down. A few times I just had to turn the power off on my PC (sigh, "what a POS" I say to myself every time I decide "oh, it was an isolated incident"... try it again... it's so cool, man, designed the way I would design a desktop... intuitive... just cool), but it fails me again and again.

    If I had to guess, I would say it's a memory problem. I do have 2GB, but nevertheless it just stops working. The mouse doesn't respond, eventually the keyboard doesn't respond, and the power button is the only way out. I never had an issue like this in Unity, using the same apps in the same way, and I bet the system will crash within 45 minutes to an hour. Usually I have 7-10 windows open. I wish I knew enough to capture the correct logs; I'm sure it's a bug. I'm not doing anything fancy: just email, word docs, Java apps, VPN, Pidgin, maybe some SQL stuff or communication apps, etc. Anyway, I've decided to wait for another major release before I try it again... until then I'm sticking with Unity, which is also very cool.

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that were raised.

    Handling metric errors: First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric: the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error', on the other hand, is a failure to collect metric data: the Agent throws the error because it cannot determine the value of the metric. Whereas an 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown, and that unknown aspect is what makes an error a lot more serious than a regular alert: if you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of.

    The life cycle of a metric error: Clearing a metric error follows pretty much the same workflow as a metric alert:
    1. The Agent signals the error after it fails to execute the metric.
    2. The error is uploaded to the OMS/repository, where it becomes visible in the Console.
    3. The error remains active until the Agent is able to execute the metric successfully. Even though the metric is still getting scheduled and executed on a regular basis, the error remains outstanding as long as the Agent is not capable of executing the metric correctly.

    Knowing this, the way to fix a metric error should be obvious: take the problem away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too:
    - Wait for the next scheduled execution. For metrics that are executed regularly (like every 15 minutes or so), it is just a matter of waiting those minutes to see the update.
    - The 'Reevaluate Alert' button can be used to force a re-execution of the metric. For a metric executed once a day, this is a better way to make sure the underlying problem has been solved. If it has been, the metric error is removed and the regular data points are uploaded to the repository.
    - And just in case you have to 'force' the issue a little: if you disable and re-enable a metric, it will get rescheduled. That means a new metric execution, and an upload of the (hopefully) fixed result.

    Database server-generated alerts and problem checkers: There are various ways the Agent can collect metric data: via a script or a SQL statement, by reading a log file, by getting a value from an SNMP OID, by listening for SNMP traps, or via the DBMS_SERVER_ALERT mechanism of an Oracle database. For those alerts which are generated by the database (like the tablespace metrics for 10g and later databases), the Agent just waits for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after improper use of the 'emctl clearstate' command), the Agent might still be aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: metrics that only report data if there is a problem (like the 'invalid objects' metric) will also run into trouble if the Agent state has been tampered with (again, incomplete recovery and improper use of 'emctl clearstate' are the two main causes of this).

    The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric again: the disabling clears the state of the metric, and the re-enabling forces a re-execution, so the new and updated results can get uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup, or failover in the case of Data Guard) to catch these kinds of mismatches.

    Read the article
