Search Results

Search found 12714 results on 509 pages for 'db schema'.


  • Play 2.0 javaToDo tutorial doesn't compile

    - by chsn
    I'm trying to follow the Play 2.0 JavaToDo tutorial and for some reason it just doesn't want to work. Have looked through stackoverflow and other online resources, but haven't found an answer to this and it's driving me crazy. Attached is the code of Application.java package controllers; import models.Task; import play.data.Form; import play.mvc.Controller; import play.mvc.Result; public class Application extends Controller { static Form<Task> taskForm = form(Task.class); public static Result index() { return redirect(routes.Application.tasks()); } public static Result tasks() { return ok( views.html.index.render(Task.all(), taskForm)); } public static Result newTask() { return TODO; } public static Result deleteTask(Long id) { return TODO; } } Attached is the code of Task.java package models; import java.util.List; import javax.persistence.Entity; import play.data.Form; import play.data.validation.Constraints.Required; import play.db.ebean.Model.Finder; import play.mvc.Result; import controllers.routes; @Entity public class Task { public Long id; @Required public String label; // search public static Finder<Long,Task> find = new Finder( Long.class, Task.class); // display tasks public static List<Task> all() { return find.all(); } // create task public static void create(Task task) { task.create(task); } // delete task public static void delete(Long id) { find.ref(id).delete(id); // find.ref(id).delete(); } // create new task public static Result newTask() { Form<Task> filledForm = taskForm.bindFromRequest(); if(filledForm.hasErrors()) { return badRequest( views.html.index.render(Task.all(), filledForm) ); } else { Task.create(filledForm.get()); return redirect(routes.Application.tasks()); } } } I get a compile error on Task.java on the line static Form<Task> taskForm = form(Task.class); As I'm working in Eclipse (the project is eclipsified before import), it's telling me that taskForm cannot be resolved and it also underlines every Play 2 command e.g. "render(), redirect(), bindFromRequest()", asking me to create a method for it. Any ideas how to solve the compilation error and also how to get Eclipse to recognize the Play 2 commands? 
EDIT: updated Application.java package controllers; import models.Task; import play.data.Form; import play.mvc.Controller; import play.mvc.Result; public class Application extends Controller { // create new task public static Result newTask() { Form<Task> filledForm = form(Task.class).bindFromRequest(); if(filledForm.hasErrors()) { return badRequest( views.html.index.render(Task.all(), filledForm) ); } else { Task.newTask(filledForm.get()); return redirect(routes.Application.tasks()); } } public static Result index() { return redirect(routes.Application.tasks()); } public static Result tasks() { return ok( views.html.index.render(Task.all(), taskForm)); } public static Result deleteTask(Long id) { return TODO; } } Updated task.java package models; import java.util.List; import javax.persistence.Entity; import play.data.Form; import play.data.validation.Constraints.Required; import play.db.ebean.Model; import play.db.ebean.Model.Finder; import play.mvc.Result; import controllers.routes; @Entity public class Task extends Model { public Long id; @Required public String label; // Define a taskForm static Form<Task> taskForm = form(Task.class); // search public static Finder<Long,Task> find = new Finder( Long.class, Task.class); // display tasks public static List<Task> all() { return find.all(); } // create new task public static Result newTask(Task newTask) { save(task); } // delete task public static void delete(Long id) { find.ref(id).delete(id); // find.ref(id).delete(); } }

    Read the article

  • MVC DropDownListFor not populating the selected value

    - by user2254436
    I'm really having trouble with MVC; in another project I've done the same thing and it worked fine, but in this project I just don't understand why the selected item in the dropdown is not populating the class correctly with EF. I have 2 classes: public partial class License { public License() { this.Customers = new HashSet<Customer>(); } public int LicenseID { get; set; } public int Lic_LicenseTypeID { get; set; } public int Lic_LicenseStatusID { get; set; } public string Lic_LicenseComments { get; set; } public virtual EntitiesList LicenseStatus { get; set; } public virtual EntitiesList LicenseType { get; set; } } public partial class EntitiesList { public EntitiesList() { this.LicensesStatus = new HashSet<License>(); this.LicensesType = new HashSet<License>(); } public int ListID { get; set; } public string List_EntityValue { get; set; } public string List_Comments { get; set; } public string List_EntityName { get; set; } public virtual ICollection<License> LicensesStatus { get; set; } public virtual ICollection<License> LicensesType { get; set; } public string List_DisplayName { get { return Regex.Replace(List_EntityName, "([a-z])([A-Z])", "$1 $2"); } } public string List_DisplayValue { get { return Regex.Replace(List_EntityValue, "([a-z])([A-Z])", "$1 $2"); } } } The EntitiesList is a table in the db that has all my "enum" lists. For example: ListID - 0 List_EntityValue - Activate List_EntityName - LicenseStatus ListID - 1 List_EntityValue - Basic List_EntityName - LicenseType This is my model: public class LicenseModel { public License License { get; set; } public SelectList LicenseStatuses { get; set; } public int SelectedStatus { get; set; } public SelectList LicenseTypes { get; set; } public int SelectedType { get; set; } } My controller for create: public ActionResult Create() { LicenseModel model = new LicenseModel(); model.License = new License(); model.LicenseStatuses = new SelectList(managerLists.GetAllLicenseStatuses(), "ListID", "List_DisplayValue"); model.LicenseTypes = new SelectList(managerLists.GetAllLicenseTypes(), "ListID", "List_DisplayValue"); return View(model); } [HttpPost] [ValidateAntiForgeryToken] public ActionResult Create(LicenseModel model) { if (ModelState.IsValid) { model.License.Lic_LicenseTypeID = model.SelectedType; model.License.Lic_LicenseStatusID = model.SelectedStatus; managerLicense.AddNewObject(model.License); return RedirectToAction("Index"); } return View(model); } managerLists and managerLicense are the managers that connect between the entities in the db and the MVC UI, nothing special... they contain queries for adding new objects, getting the lists, editing and so on. 
    And the view for creating the License: @using (Html.BeginForm()) { @Html.AntiForgeryToken() @Html.ValidationSummary(true) <fieldset> <legend>License</legend> <div class="form-group"> @Html.LabelFor(model => model.License.Lic_LicenseTypeID) @Html.DropDownListFor(model => model.SelectedType, Model.LicenseTypes, new { @class = "form-control" }) <p class="help-block">@Html.ValidationMessageFor(model => model.License.Lic_LicenseTypeID)</p> </div> <div class="form-group"> @Html.LabelFor(model => model.License.Lic_LicenseStatusID) @Html.DropDownListFor(model => model.SelectedStatus, Model.LicenseStatuses, new { @class = "form-control" }) <p class="help-block">@Html.ValidationMessageFor(model => model.License.Lic_LicenseStatusID)</p> </div> <div class="form-group"> @Html.LabelFor(model => model.License.Lic_LicenseComments) @Html.TextAreaFor(model => model.License.Lic_LicenseComments, new { @class = "form-control", rows = "3" }) <p class="help-block">@Html.ValidationMessageFor(model => model.License.Lic_LicenseComments)</p> </div> <p> <input type="submit" value="Create" /> </p> </fieldset> } Now, when I try to save the new license and it gets to the db.SaveChanges() in the manager, I'm getting: "Validation failed for one or more entities. See 'EntityValidationErrors' property for more details." At the breakpoint, Lic_LicenseTypeID and Lic_LicenseStatusID are correctly getting the IDs from the selected items in the dropdowns, but the LicenseStatus and LicenseType properties are null. What am I missing?

    Read the article

  • Image/"most resembling pixel" search optimization?

    - by SigTerm
    The situation: Let's say I have an image A, say, 512x512 pixels, and image B, 5x5 or 7x7 pixels. Both images are 24bit rgb, and B have 1bit alpha mask (so each pixel is either completely transparent or completely solid). I need to find within image A a pixel which (with its' neighbors) most closely resembles image B, OR the pixel that probably most closely resembles image B. Resemblance is calculated as "distance" which is sum of "distances" between non-transparent B's pixels and A's pixels divided by number of non-transparent B's pixels. Here is a sample SDL code for explanation: struct Pixel{ unsigned char b, g, r, a; }; void fillPixel(int x, int y, SDL_Surface* dst, SDL_Surface* src, int dstMaskX, int dstMaskY){ Pixel& dstPix = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*x + dst->pitch*y)); int xMin = x + texWidth - searchWidth; int xMax = xMin + searchWidth*2; int yMin = y + texHeight - searchHeight; int yMax = yMin + searchHeight*2; int numFilled = 0; for (int curY = yMin; curY < yMax; curY++) for (int curX = xMin; curX < xMax; curX++){ Pixel& cur = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*(curX & texMaskX) + dst->pitch*(curY & texMaskY))); if (cur.a != 0) numFilled++; } if (numFilled == 0){ int srcX = rand() % src->w; int srcY = rand() % src->h; dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*srcX + src->pitch*srcY)); dstPix.a = 0xFF; return; } int storedSrcX = rand() % src->w; int storedSrcY = rand() % src->h; float lastDifference = 3.40282347e+37F; //unsigned char mask = for (int srcY = searchHeight; srcY < (src->h - searchHeight); srcY++) for (int srcX = searchWidth; srcX < (src->w - searchWidth); srcX++){ float curDifference = 0; int numPixels = 0; for (int tmpY = -searchHeight; tmpY < searchHeight; tmpY++) for(int tmpX = -searchWidth; tmpX < searchWidth; tmpX++){ Pixel& tmpSrc = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*(srcX+tmpX) + src->pitch*(srcY+tmpY))); Pixel& tmpDst = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*((x + dst->w + tmpX) & dstMaskX) + dst->pitch*((y + dst->h + tmpY) & dstMaskY))); if (tmpDst.a){ numPixels++; int dr = tmpSrc.r - tmpDst.r; int dg = tmpSrc.g - tmpDst.g; int db = tmpSrc.g - tmpDst.g; curDifference += dr*dr + dg*dg + db*db; } } if (numPixels) curDifference /= (float)numPixels; if (curDifference < lastDifference){ lastDifference = curDifference; storedSrcX = srcX; storedSrcY = srcY; } } dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*storedSrcX + src->pitch*storedSrcY)); dstPix.a = 0xFF; } This thing is supposed to be used for texture generation. Now, the question: The easiest way to do this is brute force search (which is used in example routine). But it is slow - even using GPU acceleration and dual core cpu won't make it much faster. It looks like I can't use modified binary search because of B's mask. So, how can I find desired pixel faster? Additional Info: It is allowed to use 2 cores, GPU acceleration, CUDA, and 1.5..2 gigabytes of RAM for the task. I would prefer to avoid some kind of lengthy preprocessing phase that will take 30 minutes to finish. Ideas?

    Read the article

  • Do I need to manually create indexes for a DBIx::Class belongs_to relationship

    - by Dancrumb
    I'm using the DBIx::Class modules for an ORM approach to an application I have. I'm having some problems with my relationships. I have the following package MySchema::Result::ClusterIP; use strict; use warnings; use base qw/DBIx::Class::Core/; our $VERSION = '1.0'; __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/); __PACKAGE__->table('cluster_ip'); __PACKAGE__->add_columns( # Columns here ); __PACKAGE__->set_primary_key('objkey'); __PACKAGE__->belongs_to( 'configuration' => 'MySchema::Result::Configuration', 'config_key'); __PACKAGE__->belongs_to( 'cluster' => 'MySchema::Result::Cluster', { 'foreign.config_key' => 'self.config_key', 'foreign.id' => 'self.cluster_id' } ); As well as package MySchema::Result::Cluster; use strict; use warnings; use base qw/DBIx::Class::Core/; our $VERSION = '1.0'; __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/); __PACKAGE__->table('cluster'); __PACKAGE__->add_columns( # Columns here ); __PACKAGE__->set_primary_key('objkey'); __PACKAGE__->belongs_to( 'configuration' => 'MySchema::Result::Configuration', 'config_key'); __PACKAGE__->has_many('cluster_ip' => 'MySchema::Result::ClusterIP', { 'foreign.config_key' => 'self.config_key', 'foreign.cluster_id' => 'self.id' }); There are a couple of other modules, but I don't believe that they are relevant. When I attempt to deploy this schema, I get the following error: DBIx::Class::Schema::deploy(): DBI Exception: DBD::mysql::db do failed: Can't create table 'test.cluster_ip' (errno: 150) [ for Statement "CREATE TABLE `cluster_ip` ( `objkey` smallint(5) unsigned NOT NULL auto_increment, `config_key` smallint(5) unsigned NOT NULL, `cluster_id` char(16) NOT NULL, INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`), INDEX `cluster_ip_idx_config_key` (`config_key`), PRIMARY KEY (`objkey`), CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENCES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`config_key`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB"] at test_deploy.pl line 18 (running "CREATE TABLE `cluster_ip` ( `objkey` smallint(5) unsigned NOT NULL auto_increment, `config_key` smallint(5) unsigned NOT NULL, `cluster_id` char(16) NOT NULL, INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`), INDEX `cluster_ip_idx_config_key` (`config_key`), PRIMARY KEY (`objkey`), CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENC ES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`conf ig_key`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB") at test_deploy.pl line 18 From what I can tell, MySQL is complaining about the FOREIGN KEY constraint, in particular, the REFERENCE to (config_key, id) in the cluster table. From my reading of the MySQL documentation, this seems like a reasonable complaint, especially in regards to the third bullet point on this doc page. Here's my question. Am I missing something in the DBIx::Class module? I realize that I could explicitly create the necessary index to match up with this foreign key constraint, but that seems to be repetitive work. Is there something I should be doing to make this occur implicitly?

    Read the article

  • Pagination, next page doesn't display

    - by user1738013
    If I click page 2, that's the error: Not Found The requested URL /rank/GetAll/30 was not found on this server. My link is: http://localhost/rank/GetAll/30 Model: Rank_Model <?php Class Rank_Model extends CI_Model { public function __construct() { parent::__construct(); } public function record_count() { return $this->db->count_all("ranking"); } public function fifa_rank($limit, $start) { $this->db->limit($limit, $start); $query = $this->db->get("ranking"); if ($query->num_rows() > 0) { foreach ($query->result() as $row) { $data[] = $row; } return $data; } return false; } } ?> Controller: Rank <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed'); class rank extends CI_Controller { function __construct() { parent::__construct(); $this->load->helper("url"); $this->load->helper(array('form', 'url')); $this->load->model('Rank_Model','',TRUE); $this->load->library("pagination"); } function GetAll() { $config = array(); $config["base_url"] = base_url() . "rank/GetAll"; $config["total_rows"] = $this->Rank_Model->record_count(); $config["per_page"] = 30; $config["uri_segment"] = 3; $this->pagination->initialize($config); $page = ($this->uri->segment(3)) ? $this->uri->segment(3) : 0; $data["results"] = $this->Rank_Model->fifa_rank($config["per_page"], $page); $data['errors_login'] = array(); $data["links"] = $this->pagination->create_links(); $this->load->view('left_column/open_fifa_rank',$data); } } View Open: open_fifa_rank <?php $this->load->view('mains/header'); $this->load->view('login/loggin'); $this->load->view('mains/menu'); $this->load->view('left_column/left_column_before'); $this->load->view('left_column/menu_left'); $this->load->view('left_column/left_column'); $this->load->view('center/center_column_before'); $this->load->view('left_column/fifa_rank'); $this->load->view('center/center_column'); $this->load->view('right_column/right_column_before'); $this->load->view('login/zaloguj'); $this->load->view('right_column/right_column'); $this->load->view('mains/footer'); ?> and View: fifa_rank <table> <thead> <tr> <td>Pozycja</td> <td>Kraj</td> <td>Punkty</td> <td>Zmiana</td> </tr> </thead> <?php foreach($results as $data) {?> <tbody> <tr> <td><?php print $data->pozycja;?></td> <td><?php print $data->kraj;?></td> <td><?php print $data->punkty;?></td> <td><?php print $data->zmiana;?></td> </tr> <?php } ?> </tbody> </table> <p><?php echo $links; ?></p> Maybe you know where my problem is? Now I know where my problem is. On the first page I have the link: http://localhost/index.php/rank/GetAll But on the next: http://localhost/rank/GetAll/30 On the second link, I don't have index.php. How can I fix it?

    Read the article

  • Explained: EF 6 and “Could not determine storage version; a valid storage connection or a version hint is required.”

    - by Ken Cox [MVP]
    I have a legacy ASP.NET 3.5 web site that I’ve upgraded to a .NET 4 web application. At the same time, I upgraded to Entity Framework 6. Suddenly one of the pages returned the following error: [ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.]    System.Data.SqlClient.SqlVersionUtils.GetSqlVersion(String versionHint) +11372412    System.Data.SqlClient.SqlProviderServices.GetDbProviderManifest(String versionHint) +91    System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +92 [ProviderIncompatibleException: The provider did not return a ProviderManifest instance.]    System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +11431433    System.Data.Metadata.Edm.Loader.InitializeProviderManifest(Action`3 addError) +11370982    System.Data.EntityModel.SchemaObjectModel.Schema.HandleAttribute(XmlReader reader) +216 A search of the error message didn’t turn up anything helpful except that someone mentioned that the error message was bogus in his case. The page in question uses the ASP.NET EntityDataSource control, consumed by a Telerik RadGrid. This is a fabulous combination for putting a huge amount of functionality on a page in a very short time. Unfortunately, the 6.0.1 release of EF6 doesn’t support EntityDataSource. According to the people in charge, support is planned but there’s no timeline for an EntityDataSource build that works with EF6.  I’m not sure what to do in the meantime. Should I back out EF6 or manually wire up the RadGrid? The upshot is that you might want to rethink plans to upgrade to Entity Framework 6 for Web forms projects if they rely on that handy control. It might also help to spend a UserVoice vote here:  http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/3702890-support-for-asp-net-entitydatasource-and-dynamicda

    Read the article

  • Unable to update the EntitySet because it has a DefiningQuery and no <UpdateFunction> element

    - by Harish Ranganathan
    When working with the ADO.NET Entity Data Model, it’s common to generate the entity schema for more than a single table from our database.  With entity model generation automated by Visual Studio support, it becomes even more tempting to create and work with entity models to achieve an object mapping relationship. One of the errors that you might hit while trying to update an entity set, either programmatically using context.SaveChanges or while using the automatic insert/update code generated by GridView etc., is “Unable to update the EntitySet <EntityName> because it has a DefiningQuery and no <UpdateFunction> element exists in the <ModificationFunctionMapping> element to support the current operation” While the description is pretty lengthy, the immediate thing that comes to mind is to open the generated entity model code and see if you can update it accordingly. However, the first thing to check is whether, if the entity set is generated from a table, the table defines a primary key.  Most of the time, we create tables with primary keys.  But reference tables and other tables which don’t have a primary key cannot be updated using the Entity context, and hence it throws this error.  Unless it is a View, in which case the default model is read-only, most of the time the above error occurs because there is no primary key defined in the table. There are other reasons why this error could pop up which I am not going into for the sake of simplicity of the post.  If you find something new, please feel free to share it in comments. Hope this helps. Cheers !!!
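
    To illustrate the fix described above, here is a minimal T-SQL sketch (the table and column names are hypothetical, not from the original article): define a primary key on the offending table, then update the entity model from the database so the EntitySet is regenerated without the DefiningQuery.

        -- Hypothetical reference table that the Entity Data Model treats as read-only
        -- because it has no primary key
        CREATE TABLE dbo.OrderStatus
        (
            StatusID   INT         NOT NULL,
            StatusName VARCHAR(50) NOT NULL
        );

        -- Adding a primary key lets the model generate insert/update/delete mappings
        -- instead of a DefiningQuery the next time the model is refreshed from the database
        ALTER TABLE dbo.OrderStatus
            ADD CONSTRAINT PK_OrderStatus PRIMARY KEY (StatusID);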

    Read the article

  • SQL SERVER – Generate Report for Index Physical Statistics – SSMS

    - by pinaldave
    A few days ago, I wrote about SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS (Link). A user asked me whether we can use similar reports to get details about indexes. Yes, it is possible to do the same. Similar reports are available at the database level, just like those available at the server instance level. You can right-click on the database name and click Reports. Under Standard Reports, you will find the following reports. Disk Usage Disk Usage by Top Tables Disk Usage by Table Disk Usage by Partition Backup and Restore Events All Transactions All Blocking Transactions Top Transactions by Age Top Transactions by Blocked Transactions Count Top Transactions by Locks Count Resource Locking Statistics by Objects Object Execute Statistics Database Consistency history Index Usage Statistics Index Physical Statistics Schema Change history User Statistics Select the report named Index Physical Statistics. Once clicked, a report containing all the index names along with other information related to each index will be visible, e.g. index type and number of partitions. One column that caught my interest was Operation Recommended. In some places, it suggested that an index needs to be rebuilt. It is also possible to click and expand the partitions column and see additional details about the index as well. DBAs and developers who just want to get an idea of how their indexes are doing and their physical statistics can use this tool. Note: Please note that I will not rebuild my indexes just because this report recommends it. There are many other parameters you need to consider before rebuilding indexes. However, this tool gives you the accurate stats of your index and it can be exported right away to Excel or PDF by right-clicking on the report. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL Utility, T SQL, Technology
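
    If you would rather query this information than run the report, a rough equivalent (my assumption about what the report draws on, not something stated in the post) is the sys.dm_db_index_physical_stats DMV, which returns fragmentation and page counts per index and partition:

        -- Fragmentation overview for all indexes in the current database
        SELECT  OBJECT_NAME(ips.object_id)        AS table_name,
                i.name                            AS index_name,
                ips.index_type_desc,
                ips.partition_number,
                ips.avg_fragmentation_in_percent,
                ips.page_count
        FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN    sys.indexes AS i
                ON  i.object_id = ips.object_id
                AND i.index_id  = ips.index_id
        ORDER BY ips.avg_fragmentation_in_percent DESC;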

    Read the article

  • Using design-patterns to transform web-service model classes into local model classes and vise versa

    - by Daniil Petrov
    There is a web application built with Play framework 1.2.7. It contains less than 10 model classes. The main purpose of the application is to provide lightweight access to a complex remote application (more than 50 model classes). The remote application has its own SOAP API and we use it for synchronization of data. There is a scheduled job in the web app which makes requests to the remote app. It gets bunches of objects from the remote model and populates corresponding objects of the local model. Currently, there are two groups of classes - the local model and the remote model (generated from the WSDL schema). It is not allowed to make any modifications to the remote model. Transformations are made in the scheduled job class. When it gets objects from the remote app it creates local objects. Recently, it was decided to add the possibility of modifying the remote objects. This requires more transformations on our side. We need to transform from the remote to the local model when reading objects and from the local to the remote model when changing objects. I wonder if it would be possible to use some design patterns to reduce the number of transformations?

    Read the article

  • SQL SERVER – Color Coding SQL Server Management Studio Status Bar – SQL in Sixty Seconds #023 – Video

    - by pinaldave
    I often see developers executing unplanned code on the production server when they actually want to execute it on the development server. Developers and DBAs get confused because when they use SQL Server Management Studio (SSMS) they forget to pay attention to the server they are connecting to. It is very easy to fix this problem. You can select a different color for each server. Once you have a different color for each server in the status bar, it will be easier for developers to notice the server against which they are about to execute a script. Personally, when I work on SQL Server development, here is the color code which I follow. I keep green for my development server, blue for my staging server and red for my production server. Honestly, the colors themselves do not signify much, but a different color for each server is the key here. More Tips on SSMS in SQL in Sixty Seconds: Generate Script for Schema and Data in SQL Server – SQL in Sixty Seconds #021  Remove Debug Button in SQL Server Management Studio – SQL in Sixty Seconds #020  Three Tricks to Comment T-SQL in SQL Server Management Studio – SQL in Sixty Seconds #019  Importing CSV into SQL Server – SQL in Sixty Seconds #018   Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed! General database Continuous Integration · What is Database Continuous Integration? (David Atkinson) · Continuous Integration for SQL Server Databases (Troy Hunt) · Installing NAnt to drive database continuous integration (David Atkinson) · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone) · How the "migrations" approach makes database continuous integration possible (David Atkinson) · Continuous Integration for the Database (Keith Bloom) Setting up Continuous Integration with Red Gate tools · Continuous integration for databases using Red Gate tools - A technical overview (White Paper, Roger Hart and David Atkinson) · Continuous integration for databases using Red Gate SQL tools (Product pages) · Database continuous integration step by step (David Atkinson) · Database Continuous Integration with Red Gate Tools (video, David Atkinson) · Database schema synchronisation with RedGate (Vincent Brouillet) · Database continuous integration and deployment with Red Gate tools (David Duffett) · Automated database releases with TeamCity and Red Gate (Troy Hunt) · How to build a database from source control (David Atkinson) · Continuous Integration Automated Database Update Process (Lance Lyons) Other · Evolutionary Database Design (Martin Fowler) · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage) · Recipes for Continuous Database Integration (book, Pramod Sadalage) · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic) · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green) · Continuous Database Integration (covers MySQL, Pearson Education) Technorati Tags: SQL Server,Continuous Integration

    Read the article

  • Using Oracle Data in the Business Rules Engine

    - by Christopher House
    Yesterday I started working on some new functionality that I had planned to implement using the Business Rules Engine.  As I got further into it, I realized that some of my rules were going to need to reference some data that resides in an Oracle database.  I knew the Business Rules Composer supports using DataConnections and TypedDataTables, but I’d never used this functionality myself, so I wasn’t so sure how it would work with Oracle.  As it turns out, it’s very do-able, there’s just little hoop you need to jump through. I fired up BRC and my suspicions were quickly confirmed.  BRC only recognizes SQL Server databases when it comes to editing rules.  Not letting that deter me, I decided to see if I could “trick” BRE into using Oracle data. On my local SQL server, I created a new database and in that database, created a table that matched the schema of the table I wanted to use in the Oracle database.  I then set about creating my rules, referencing the new SQL Server database everywhere I wanted to use Oracle data.  Finally, I created a new class library and added a class that implements Microsoft.RuleEngine.IFactRetriever.  In that class, I added the necessary code to get a DataSet from the Oracle server, wrap it in a TypedDataTable and assert it into the rule engine.  It’s worth pointing out that in my IFactRetriever class, I made sure to set my DataSet name to the name of the database I’d referenced in the BRC and the DataTable’s name to the name of the table that I’d referenced in the BRC. After gac’ing the new class library and deploying my policy, I tested and everything worked as expected.

    Read the article

  • SQL SERVER – sp_describe_first_result_set New System Stored Procedure in SQL Server 2012

    - by pinaldave
    I might have said this earlier many times but I will say it again – SQL Server never ceases to amaze me. Here is an example of it: sp_describe_first_result_set. I stumbled upon it when I was looking for something else on BOL. This new system stored procedure attracted me to experiment with it. This SP does exactly what its name suggests – describes the first result set. Let us see a very simple example of the same. Please note that this will work only on SQL Server 2012. EXEC sp_describe_first_result_set N'SELECT * FROM AdventureWorks.Sales.SalesOrderDetail', NULL, 1 GO Here is the partial resultset. Now let us take this simple example to the next level and learn one more interesting detail about this function. First I will be creating a view and then we will use the same procedure over the view. USE AdventureWorks GO CREATE VIEW dbo.MyView AS SELECT [SalesOrderID] soi_v ,[SalesOrderDetailID] sodi_v ,[CarrierTrackingNumber] stn_v FROM [Sales].[SalesOrderDetail] GO Now let us execute the above stored procedure with various options. You can notice I am changing the very last parameter which I am passing to the stored procedure. This option is known as browse_information_mode. EXEC sp_describe_first_result_set N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 0; GO EXEC sp_describe_first_result_set N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 1; GO EXEC sp_describe_first_result_set N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 2; GO Here is the result of all three queries together in a single image for easier understanding of their differences. You can see that when BrowseMode is set to 1 the resultset describes the details of the original source database, schema as well as source table. When BrowseMode is set to 2 the resultset describes the details of the view as the source database. I found it really really interesting that there exists a system stored procedure which describes the resultset of the output. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology
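
    As a side note not covered above, SQL Server 2012 also exposes the same metadata through the sys.dm_exec_describe_first_result_set management function, which can be filtered and joined like a table; a small sketch:

        -- Same description as the stored procedure, but usable in a FROM clause
        SELECT name, system_type_name, is_nullable
        FROM sys.dm_exec_describe_first_result_set(
                 N'SELECT * FROM AdventureWorks.Sales.SalesOrderDetail', NULL, 1);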

    Read the article

  • Windows Phone 7 ActiveSync error 86000C09 (My First Post!)

    - by Chris Heacock
    Hello fellow geeks! I'm kicking off this new blog with an issue that was a real nuisance, but was relatively easy to fix. During a recent Exchange 2003 to 2010 migration, one of the users was getting an error on his Windows Phone 7 device. The error code that popped up on the phone on every sync attempt was 86000C09. We tested the following: Different user on the same device: WORKED Problem user on a different device: FAILED   Seemed to point (conclusively) at the user's account as the crux of the issue. This error can come up if a user has too many devices syncing, but he had no other phones. We verified that using the following command: Get-ActiveSyncDeviceStatistics -Identity USERID Turns out, it was the old familiar inheritable permissions issue in Active Directory. :-/ This user was not an admin, nor had he ever been one. HOWEVER, his account was cloned from an ex-admin user, so the unchecked box stayed unchecked. We checked the box and voila, data started flowing to his device(s). Here's a refresher on enabling Inheritable permissions: Open ADUC, and enable Advanced Features: Then open properties and go to the Security tab for the user in question: Click on Advanced, and the following screen should pop up: Verify that "Include inheritable permissions from this object's parent" is *checked*.   You will notice that for certain users, this box keeps getting unchecked. This is normal behavior due to the inbuilt security of Active Directory. People that are in the following groups will have this flag altered by AD: Account Operators Administrators Backup Operators Domain Admins Domain Controllers Enterprise Admins Print Operators Read-Only Domain Controllers Replicator Schema Admins Server Operators Once the box is checked, permissions will flow and the user will be set correctly. Even if the box gets unchecked again, the user will function normally, as they now have the proper permissions configured. You need to perform this same exercise when enabling users for Lync, but that's another blog. :-)   -Chris

    Read the article

  • REST: How to store and reuse REST call queries

    - by Jason Holland
    I'm learning C# by programming a real monstrosity of an application for personal use. Part of my application uses several SPARQL queries like so: const string ArtistByRdfsLabel = @" PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> SELECT DISTINCT ?artist WHERE {{ {{ ?artist rdf:type <http://dbpedia.org/ontology/MusicalArtist> . ?artist rdfs:label ?rdfsLabel . }} UNION {{ ?artist rdf:type <http://dbpedia.org/ontology/Band> . ?artist rdfs:label ?rdfsLabel . }} FILTER ( str(?rdfsLabel) = '{0}' ) }}"; string Query = String.Format(ArtistByRdfsLabel, Artist); I don't like the idea of keeping all these queries in the same class that I'm using them in so I thought I would just move them into their own dedicated class to remove clutter in my RestClient class. I'm used to working with SQL Server and just wrapping every query in a stored procedure but since this is not SQL Server I'm scratching my head on what would be the best for these SPARQL queries. Are there any better approaches to storing these queries using any special C# language features (or general, non C# specific, approaches) that I may not already know about? EDIT: Really, these SPARQL queries aren't anything special. Just blobs of text that I later want to grab, insert some parameters into via String.Format and send in a REST call. I suppose you could think of them the same as any SQL query that is kept in the application layer, I just never practiced keeping SQL queries in the application layer so I'm wondering if there are any "standard" practices with this type of thing.

    Read the article

  • Issue with gpg agent in Ubuntu 12.04 after installing gnome3 shell

    - by Jeroen
    I just did a fresh install of Ubuntu 12.04. Initially things were working. But after I installed some software, the 'gpg agent' is unresponsive. I suspect it has something to do with upgrades that I downloaded from the gnome 3 ppa. When I try to sign a package, it terminates with: gpg: problem with the agent - disabling agent use debsign: gpg error occurred! Aborting.... debuild: fatal error at line 1271: running debsign failed The GPG gui tool (called "Passwords and Keys" or seahorse) isn't starting anymore either. When I click it, it tries to start and then gives up and dies after a couple of seconds. I am not sure where to look for log files of gpg agent. The only thing that I see in /var/log is in auth.log that says: May 1 20:04:14 jeroen-ubuntu gnome-keyring-daemon[1997]: couldn't create prompt for gnupg passphrase: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.keyring.SystemPrompter was not provided by any .service files Not sure if it is related, but when I try to start seahorse from the command line, I get: jeroen@jeroen-ubuntu:~$ seahorse (seahorse:4828): GLib-GIO-ERROR **: Settings schema 'org.gnome.crypto.pgp' is not installed Edit: I fixed the seahorse GUI by manually downloading and reinstalling gnome-keyring version from precise instead of the ppa. However, I still cannot sign packages.

    Read the article

  • Don’t learn SSDT, learn about your databases instead

    - by jamiet
    Last Thursday I presented my session “Introduction to SSDT” at the SQL Supper event held at the offices of 7 Digital (loved the samosas, guys). I did my usual spiel, tour of the IDE, connected development, declarative database development yadda yadda yadda… and at the end asked if there were any questions. One gentleman in attendance (sorry, can’t remember your name) raised his hand and stated that by attempting to evangelise all of the features I’d missed the single biggest benefit of SSDT, that it can tell you things about your database that you didn’t already know. I realised that he was dead right. SSDT allows you to import your whole database schema into a new project and it will instantly give you a list of errors and/or warnings pertaining to the objects in your database. Invalid references (e.g. a long-forgotten stored procedure that refers to a non-existent column), unnecessary 3-part naming, incorrect case usage, syntax errors…it’ll tell you about all of ‘em! Turn on static code analysis (this article shows you how) and you’ll learn even more, such as any stored procedures that begin with “sp_”, WHERE clauses that will kill performance, use of @@IDENTITY instead of SCOPE_IDENTITY(), use of deprecated syntax, implicit casts etc…. the list goes on and on. I urge you to download and install SSDT (takes a few minutes, it’s free and you don’t need SQL Server or Visual Studio pre-installed), start a new project: right-click on your new project and import from your database: and see what happens: You may be surprised what you discover. Let me know in the comments below what results you get, total number of objects, number of errors/warnings, I’d be interested to know! @Jamiet
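
    To make one of those warnings concrete, here is a small hedged T-SQL illustration of the @@IDENTITY versus SCOPE_IDENTITY() rule mentioned above (the table name is made up for illustration):

        -- Flagged pattern: @@IDENTITY returns the last identity value generated in the
        -- session, which may come from a trigger inserting into a different table
        INSERT INTO dbo.Orders (CustomerID) VALUES (42);
        SELECT @@IDENTITY AS LastIdentityAnywhere;

        -- Preferred pattern: SCOPE_IDENTITY() is limited to the current scope
        INSERT INTO dbo.Orders (CustomerID) VALUES (42);
        SELECT SCOPE_IDENTITY() AS LastIdentityThisScope;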

    Read the article

  • Storing SCA Metadata in the Oracle Metadata Services Repository by Nicolás Fonnegra Martinez and Markus Lohn

    - by JuergenKress
    The advantages of using the Oracle Metadata Services Repository as a central storage for the metadata. SCA has been available since the release of the Oracle SOA Suite 11g. This technology combines and orchestrates several SOA components inside an SCA composite, making design, development, deployment, and maintenance easier. SCA development is metadata-driven, meaning that metadata artifacts, such as Web Services Description Language (WSDL), XML Schema Definition (XSD), XML, others, define the composite's behavior. With the increased number of composites and the dependencies among them, it became necessary to manage all the metadata in an adequate way. This article will address the advantages of using the Oracle Metadata Services (MDS) repository as a central storage for the metadata. The MDS repository is a central part of the Oracle Fusion Middleware landscape, managing the metadata for several technologies, such as Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, and the Oracle SOA Suite. This article is divided into three parts. The first part provides an overview of SCA and MDS. The second part describes some MDS tasks that help in the management of the SCA metadata files inside the repository. The third part shows how to develop SCA composites in combination with an MDS repository. Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SCA Metadata. Metadata Services Repository,Nicolás Fonnegra Martinez,Markus Lohn,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • A dacpac limitation – Deploy dacpac wizard does not understand SqlCmd variables

    - by jamiet
    Since the release of SQL Server 2012 I have become a big fan of using dacpacs for deploying SQL Server databases (for reasons that I will explain some other day) and I chose to use a dacpac to distribute my recently announced utility sp_ssiscatalog (read: Introducing sp_ssiscatalog (v1.0.0.0)). Unfortunately if you read that blog post you may have taken note of the following: Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard; however, in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. I think it is worth calling out this limitation separately in this blog post because it’s a limitation that all dacpac users need to be aware of. If you try and deploy the dacpac containing sp_ssiscatalog using the wizard in SSMS then this is what you will see: TITLE: Microsoft SQL Server Management Studio ------------------------------ Could not deploy package. (Microsoft.SqlServer.Dac) ------------------------------ ADDITIONAL INFORMATION: Missing values for the following SqlCmd variables:SSISDB. (Microsoft.Data.Tools.Schema.Sql) ------------------------------ BUTTONS: OK ------------------------------ The message is quite correct. The SSDT DB project that I used to build this dacpac *does* have a SqlCmd variable in it called SSISDB: Quite simply, the Dac Deployment wizard in SSMS is not capable of deploying such dacpacs. Your only option for deploying such dacpacs is to use the command-line tool sqlpackage.exe. Generally I use sqlpackage.exe anyway (which is why it has taken me months to encounter the aforementioned problem) and have found it preferable to using a GUI-based wizard. Your mileage may vary. @Jamiet
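
    For reference, a sqlpackage.exe call along these lines (the server, database and file names are placeholders, not from the original post) supplies the SqlCmd variable that the wizard cannot handle via the /v: switch:

        sqlpackage.exe /Action:Publish /SourceFile:"sp_ssiscatalog.dacpac" ^
            /TargetServerName:"MyServer" /TargetDatabaseName:"SsisUtilities" ^
            /v:SSISDB=SSISDB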

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'. I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;) This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.) Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM): cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn" C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt   I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)  
Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that. Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated. Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which files it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! - and am curious about the PDW compilation and Cloud DB replication reasons. One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found: hekaton_slow_param_passing Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path The reason why Hekaton parameter passing code took the slow code path hekaton_slow_param_pass_reason sp_deploy_hekaton_database sp_undeploy_hekaton_database sp_drop_hekaton_database sp_checkpoint_hekaton_database sp_restore_hekaton_database e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)? 
I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions: SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%' SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%' Which revealed: name ------------------------ (0 row(s) affected) name ------------------------ sp_createstats sp_recompile sp_updatestats (3 row(s) affected)   Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.) OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton": sp_createstats: -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables   OK, that's interesting, let's go looking down a little further: ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table   Wellllll, that tells us a few new things: There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!) They are not standard user tables and probably not in memory UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post. The OBJECTPROPERTY function has an undocumented TableIsInMemory option Let's check out sp_recompile: -- (3) Must not be a Hekaton procedure.   And once again go a little further: if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND ObjectProperty(@objid, 'IsInlineFunction') = 0 AND ObjectProperty(@objid, 'IsView') = 0 AND -- Hekaton procedure cannot be recompiled -- Make them go through schema version bumping branch, which will fail ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)   And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null, I think it only works when called from a system object.) This is neat! Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for: SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'   I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice! Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.  
While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring! So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.) Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there: Attach Matrix Database DM_MATRIX_COMM_PIPELINES MATRIXXACTPARTICIPANTS dm_matrix_agents   Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek: ALTER FEDERATION DROP ALTER FEDERATION SPLIT DROP FEDERATION   #tsql2sday

    Read the article

  • Resolving "PLS-00201: identifier 'DBMS_SYSTEM.XXXX' must be declared" Error

    - by Giri Mandalika
    Here is a failure sample. SQL set serveroutput on SQL alter package APPS.FND_TRACE compile body; Warning: Package Body altered with compilation errors. SQL show errors Errors for PACKAGE BODY APPS.FND_TRACE: LINE/COL ERROR -------- ----------------------------------------------------------------- 235/6 PL/SQL: Statement ignored 235/6 PLS-00201: identifier 'DBMS_SYSTEM.SET_EV' must be declared .. By default, DBMS_SYSTEM package is accessible only from SYS schema. Also there is no public synonym created for this package. So, the solution is to create the public synonym and grant "execute" privilege on DBMS_SYSTEM package to all database users or a specific user. eg., SQL CREATE PUBLIC SYNONYM dbms_system FOR dbms_system; Synonym created. SQL GRANT EXECUTE ON dbms_system TO APPS; Grant succeeded. - OR - SQL GRANT EXECUTE ON dbms_system TO PUBLIC; Grant succeeded. SQL alter package APPS.FND_TRACE compile body; Package body altered. Note that merely granting execute privilege is not enough -- creating the public synonym is as important to resolve this issue.

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require the recovery plan to be tested again, as these changes will most probably require the plan itself to be changed and tweaked. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point.
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match them to values in the MDF file. And when you have finally read the executed transactions, you still have to create a script for them. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add. To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script against the restored database using an integrated development environment tool such as SQL Server Management Studio. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
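For comparison, here is roughly what the native backup schedule and point-in-time restore described above look like in T-SQL — a minimal sketch only, with all database names, logical file names, paths, and times being hypothetical:

    -- Sketch: the backup schedule described above (names and paths are hypothetical).
    BACKUP DATABASE AdventureWorks2014
        TO DISK = N'D:\Backup\AW2014_full.bak' WITH INIT;   -- e.g. nightly
    BACKUP LOG AdventureWorks2014
        TO DISK = N'D:\Backup\AW2014_log_0915.trn';         -- e.g. every 15 minutes

    -- Sketch: the native point-in-time restore that the article contrasts with a log reader.
    RESTORE DATABASE Restored_AW2014
        FROM DISK = N'D:\Backup\AW2014_full.bak'
        WITH MOVE 'AdventureWorks2014'     TO N'D:\Data\Restored_AW2014.mdf',
             MOVE 'AdventureWorks2014_log' TO N'D:\Data\Restored_AW2014_log.ldf',
             NORECOVERY;
    RESTORE LOG Restored_AW2014
        FROM DISK = N'D:\Backup\AW2014_log_0915.trn'
        WITH STOPAT = '2014-03-01 09:05:00', RECOVERY;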

    Read the article

  • .NET 4.5 now supported with Windows Azure Web Sites

    - by ScottGu
    This week we finished rolling out .NET 4.5 to all of our Windows Azure Web Site clusters.  This means that you can now publish and run ASP.NET 4.5 based apps, and use  .NET 4.5 libraries and features (for example: async and the new spatial data-type support in EF), with Windows Azure Web Sites.  This enables a ton of really great capabilities - check out Scott Hanselman’s great post of videos that highlight a few of them. Visual Studio 2012 includes built-in publishing support to Windows Azure, which makes it really easy to publish and deploy .NET 4.5 based sites within Visual Studio (you can deploy both apps + databases).  With the Migrations feature of EF Code First you can also do incremental database schema updates as part of publishing (which enables a really slick automated deployment workflow). Each Windows Azure account is eligible to host 10 free web-sites using our free-tier.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today. In the next few days we’ll also be releasing support for .NET 4.5 and Windows Server 2012 with Windows Azure Cloud Services (Web and Worker Roles) – together with some great new Azure SDK enhancements.  Keep an eye out on my blog for details about these soon. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SQL SERVER – How to Set Variable and Use Variable in SQLCMD Mode

    - by Pinal Dave
    Here is the question which I received the other day on the SQLAuthority Facebook page. Social media is a wonderful thing and I love the active conversation between blog readers and myself – actually I think social media adds a human factor to any conversation. Here is the question - “I am using sqlcmd in SSMS – I am not sure how to declare variable and pass it, for example I have a database and it has table, how can I make the table variable dynamic and pass different value everytime?” Fantastic question, and here is its very simple answer. First of all, enable sqlcmd mode in SQL Server Management Studio as described in the following image. Now, in the query editor, type the following SQL. :SETVAR DatabaseName "AdventureWorks2012" :SETVAR SchemaName "Person" :SETVAR TableName "EmailAddress" USE $(DatabaseName); SELECT * FROM $(SchemaName).$(TableName); Note that I have set the values of the database, schema and table as sqlcmd variables and I am executing the query using those parameters. Well, that was it, sqlcmd is a very simple language to master and it also aids in doing various tasks easily. If you have any other sqlcmd tips, please leave a comment and I will publish it with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd
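    The same script also works outside of SSMS; a small sketch of passing the values from a command prompt with the -v switch (the server name, file name, and values here are only examples):

        -- Sketch: query.sql — nothing is hard-coded, only sqlcmd variables are used.
        USE $(DatabaseName);
        SELECT * FROM $(SchemaName).$(TableName);

        -- Run from a command prompt, supplying the values with -v (hypothetical server/file names):
        --   sqlcmd -S .\SQL2012 -i query.sql -v DatabaseName="AdventureWorks2012" SchemaName="Person" TableName="EmailAddress"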

    Read the article

  • Learning about sparse columns and filtered indexes

    - by Christian
    I’ve been brushing up on sparse columns and filtered indexes recently and two resources stood out for me as indispensable, so I thought I’d share them. Those of you studying for Microsoft Certified Master: SQL Server will no doubt have found the Readiness Videos published on TechNet, and Kimberly Tripp’s (Blog|Twitter) webcast in this series on sparse columns provides an excellent resource showing different schema designs and specifically where sparse columns fit in. MCM Readiness Video on Sparse columns by Kimberly Tripp http://technet.microsoft.com/en-us/sqlserver/ff977043 The second resource is a session from this year’s PASS Summit (2010) by Don Vilen (Twitter) called Filtered Indexes, Sparse Columns: Together, Separately (AD203). I thought this session was great and in combination with Kimberly’s webcast provides the perfect background for anyone wanting to learn this topic, especially for those studying for the MCM knowledge exam. If you attended PASS you should have a login to stream Don’s session but if not you can buy the official DVDs from http://www.sqlpass.org The DVDs are a worthy investment considering how much material you get access to!   Regards, Christian Christian Bolton  - MCA, MCM, MVP Technical Director http://coeo.com - SQL Server Consulting & Managed Services twitter: @christianbolton
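    For anyone who wants a concrete picture of the two features before watching the sessions, here is a tiny illustrative sketch (table and column names are invented for the example):

        -- Sketch: a sparse column combined with a filtered index.
        CREATE TABLE dbo.Document
        (
            DocumentID INT IDENTITY(1,1) PRIMARY KEY,
            Title      NVARCHAR(200) NOT NULL,
            LegacyCode VARCHAR(20) SPARSE NULL   -- populated for only a small fraction of rows
        );

        -- The filtered index covers only the rows that actually have a value,
        -- so it stays small and cheap to maintain.
        CREATE NONCLUSTERED INDEX IX_Document_LegacyCode
            ON dbo.Document (LegacyCode)
            WHERE LegacyCode IS NOT NULL;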

    Read the article
