Search Results

Search found 10242 results on 410 pages for 'stored proc'.

Page 112/410

  • SQL help - find the table that has 'somefieldId' as the primary key?

    - by Kettenbach
    Hello all, how can I search my SQL database for a table that contains a field 'tiEntityId'? This field is referenced in a stored procedure, but I am unable to identify which table this id is a primary key for. Any suggestions? I currently look through stored procedure definitions for references to text by using something like this:

        DECLARE @Search varchar(255)
        SET @Search = '[10.10.100.50]'
        SELECT DISTINCT o.name AS Object_Name, o.type_desc
        FROM sys.sql_modules m
        INNER JOIN sys.objects o ON m.object_id = o.object_id
        WHERE m.definition LIKE '%' + @Search + '%'
        ORDER BY 2, 1

    Do any SQL gurus out there know what I need to use to find the table that contains the field, preferably the table where that field is the primary key? Thanks so much for any tips. Cheers, ~ck in San Diego
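
    One way to do this, sketched against the SQL Server 2005+ catalog views (the column name is the one from the question; adjust as needed), is to look the column up in sys.columns and check whether it participates in a primary key index:

        DECLARE @ColumnName sysname
        SET @ColumnName = 'tiEntityId'

        SELECT t.name AS table_name,
               c.name AS column_name,
               CASE WHEN ic.column_id IS NOT NULL THEN 'yes' ELSE 'no' END AS in_primary_key
        FROM sys.columns c
        INNER JOIN sys.tables t ON t.object_id = c.object_id
        LEFT JOIN sys.indexes i
               ON i.object_id = t.object_id AND i.is_primary_key = 1
        LEFT JOIN sys.index_columns ic
               ON ic.object_id = i.object_id
              AND ic.index_id = i.index_id
              AND ic.column_id = c.column_id
        WHERE c.name = @ColumnName
        ORDER BY table_name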

    Read the article

  • Apache basic auth, mod_authn_dbd and password salt

    - by Cristian Vrabie
    Using Apache mod_auth_basic and mod_authn_dbd you can authenticate a user by looking up that user's password in the database. I can see that working if the password is held in the clear, but what if we use a random string as a salt (also stored in the database) and then store the hash of the concatenation? mod_authn_dbd requires you to specify a query that selects the password, not one that decides whether the user is authenticated or not. So you cannot use that query to concatenate the user-provided password with the salt and then compare with the stored hash.

        AuthDBDUserRealmQuery "SELECT password FROM authn WHERE user = %s AND realm = %s"

    Is there a way to make this work?

    Read the article

  • ASP.Net delete record audit trigger

    - by Germ
    Suppose you have the following: an ASP.NET web application that calls a stored procedure to delete a record. The table has a trigger on it that will insert an audit entry each time a record is deleted. I want to be able to record in the audit entry the person who deleted the record. What would be the best way to go about achieving this? I know I could remove the trigger and have the delete stored procedure insert the audit entry prior to deleting, but are there other recommended alternatives?
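
    One pattern that is often suggested for this (a sketch only; the table, column, and procedure names below are hypothetical) is to have the delete stored procedure stash the application user in CONTEXT_INFO, which the delete trigger can then read back:

        -- The web app passes the current user name to the delete procedure.
        CREATE PROCEDURE dbo.Widget_Delete
            @WidgetId  int,
            @DeletedBy varchar(50)
        AS
        BEGIN
            DECLARE @ctx varbinary(128)
            SET @ctx = CAST(@DeletedBy AS varbinary(128))
            SET CONTEXT_INFO @ctx                          -- visible to triggers on this connection

            DELETE FROM dbo.Widget WHERE WidgetId = @WidgetId
        END
        GO

        CREATE TRIGGER dbo.trg_Widget_Delete ON dbo.Widget
        AFTER DELETE
        AS
        BEGIN
            INSERT INTO dbo.WidgetAudit (WidgetId, DeletedBy, DeletedOn)
            SELECT d.WidgetId,
                   CAST(CONTEXT_INFO() AS varchar(128)),   -- whoever the procedure recorded; may carry trailing binary-zero padding
                   GETDATE()
            FROM deleted d
        END
        GO

    This keeps the audit logic in the trigger while still capturing the application-level user rather than the shared SQL login.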

    Read the article

  • Sessions not persisting between requests

    - by klonq
    My session objects are only stored within the request scope on Google App Engine and I can't figure out how to persist objects between requests. The docs are next to useless on this matter and I can't find anyone who's experienced a similar problem. Please help. When I store session objects in the servlet and forward the request to a JSP using:

        getServletContext().getRequestDispatcher("/example.jsp").forward(request, response);

    everything works like it should. But when I store objects to the session and redirect the request using:

        response.sendRedirect("/example/url");

    the session objects are lost to the ether. In fact, when I dump the session key/value pairs on new requests there is absolutely nothing; session objects only appear within the request scope of the servlets which create them. It appears to me that the objects are not being written to Memcache or the Datastore. In terms of configuring sessions for my application, I have set <sessions-enabled>true</sessions-enabled> in appengine-web.xml. Is there anything else I am missing? The single paragraph of documentation on sessions also notes that only objects which implement Serializable can be stored in the session between requests. I have included an example of the code which is not working below. The obvious solution is to not use redirects, and this might be OK for the example given below, but some application data does need to be stored in the session between requests, so I need to find a solution to this problem.

    EXAMPLE: The class FlashMessage gives feedback to the user from server-side operations.

        if (email.send()) {
            FlashMessage flash = new FlashMessage(FlashMessage.SUCCESS, "Your message has been sent.");
            session.setAttribute(FlashMessage.SESSION_KEY, flash);
            // The flash message will not be available in the session object in the next request
            response.sendRedirect(URL.HOME);
        } else {
            FlashMessage flash = new FlashMessage(FlashMessage.ERROR, FlashMessage.INVALID_FORM_DATA);
            session.setAttribute(FlashMessage.SESSION_KEY, flash);
            // The flash message is displayed without problem
            getServletContext().getRequestDispatcher(Templates.CONTACT_FORM).forward(request, response);
        }

    FlashMessage.java:

        import java.io.Serializable;

        public class FlashMessage implements Serializable {
            // I have tried using different, default and no serialVersionUID
            private static final long serialVersionUID = 8109520737272565760L;

            public static final String SESSION_KEY = "flashMessage";
            public static final String ERROR = "error";
            public static final String SUCCESS = "success";
            public static final String INVALID_FORM_DATA = "Your request failed to validate.";

            private String message;
            private String type;

            public FlashMessage(String type, String message) {
                this.type = type;
                this.message = message;
            }

            public String display() {
                return "<div id='flash' class='" + type + "'>" + message + "</div>";
            }
        }

    Read the article

  • Migrating from CVS to Mercurial - how to handle cross-repo symbolic links?

    - by NVRAM
    I have a project that is stored in CVS as numerous modules/repositories. In several of the modules the CVS tree has symbolic links to the files in another tree. For example, the internal support tools have links to binary files (DLL, EXE) that are created and stored in the C# module. In all cases, the files are modified only in the module where the files exist and are treated as read-only in the tree where the symbolic link exists. More often than not, the files are pulled to machines running MS Windows, so the use of symbolic links on the developer machine is not an option. My question is this: is there a mechanism in Mercurial that can provide the same capabilities?

    Read the article

  • direct access to vector elements similar to arrays

    - by mixm
    Hi. I'm currently creating a tile-based game, where the elements of the game are placed in four different vectors (since there are multiple game objects with different properties, hence stored in different vectors). These game elements themselves contain x and y coordinates similar to how they would be stored in a two-dimensional array. I was wondering if there is a way to access these vector elements similar to two-dimensional array access (currently I am using a for loop to cycle through the elements while comparing their coordinates). This kinda sucks when I need to refresh my display at every game cycle (given the large number of comparisons and loops). I'm implementing this in Java, btw.

    Read the article

  • Storing SQL queries in Table in sql server

    - by Rohit
    We have multiple jobs in our system. These jobs are listed in a grid. We have 3 different user types (UserTypeID 1, 2, 3). For each user type the listing is different, and the user can filter the listing by selecting a view from a dropdown. ViewName in the table below is the view which needs to be displayed. To achieve this functionality, a fellow developer has created the following table structure and stored SQL fragments in the SQLExpression column. In my opinion the query should not be stored in the database. What are the pros and cons of this approach, and what are the available alternatives?

        JobListingViewID   ViewName     SQLExpression                UserTypeID
        3                  All Jobs     1 = 1                        3
        4                  Error Jobs   JobStatusID IN ( 2 )         1
        5                  Error Jobs   JobStatusID IN ( 2 )         2
        6                  Error Jobs   JobStatusID IN ( 2 )         3
        7                  Speech       JobStatusID IN ( 1, 3, 8 )   1
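
    For context, a fragment stored this way typically gets spliced into dynamic SQL when the grid is populated, roughly as in the sketch below (the procedure, table, and column names other than those in the question are assumptions). Most of the pros and cons of the design follow from this step, since the fragment is concatenated into the statement rather than passed as a parameter:

        DECLARE @UserTypeID int, @ViewName varchar(50)
        DECLARE @Filter nvarchar(max), @Sql nvarchar(max)

        SET @UserTypeID = 1
        SET @ViewName = 'Error Jobs'

        SELECT @Filter = SQLExpression
        FROM dbo.JobListingView
        WHERE UserTypeID = @UserTypeID AND ViewName = @ViewName

        -- The stored fragment becomes part of the WHERE clause of the listing query.
        SET @Sql = N'SELECT JobID, JobName, JobStatusID FROM dbo.Job WHERE ' + @Filter
        EXEC sp_executesql @Sql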

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores Job Vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", and "Salary range". There is an EAV design for "Custom" fields that the Clients can set up themselves, such as "Manager Name" or "Working Hours". The field names are stored in a "ClientText" table and the data is stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value. Lastly, there is a many-to-many EAV design for custom tagging / categorising the vacancies with things such as the Locations/Offices the vacancy is in, or a list of skills required. This is stored as a "ClientCategory" table listing the types of tag ("Locations", "Skills"), a "ClientCategoryItem" table listing the valid values for each Category (e.g., "London, Paris, New York, Rome" or "C#, VB, PHP, Python"), and finally a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add. I am now designing a new system that is very similar to the existing system; however, I have the ability to restrict the number of custom fields a Client can have, and it's being built from scratch so I have no legacy issues to deal with. For the Custom Fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs. It is the tagging / categorising design that I am struggling with. If I limit a client to having 5 categories / types of tag, should I create 5 tables listing the possible values ("CustomCategoryItems1-5") and then an additional 5 many-to-many tables ("VacancyCustomCategoryItem1-5")? This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change such that I need 6 custom categories rather than 5, this will result in a lot of code change. Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has run into all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS used is SQL Server 2005; however, 2008 is an option if required for any particular pattern.
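
    For reference, a minimal DDL sketch of the existing three-table tagging design as described above (data types and the ClientID column are assumptions):

        CREATE TABLE ClientCategory (
            ClientCategoryID int PRIMARY KEY,
            ClientID         int NOT NULL,
            Name             varchar(100) NOT NULL      -- e.g. 'Locations', 'Skills'
        )

        CREATE TABLE ClientCategoryItem (
            ClientCategoryItemID int PRIMARY KEY,
            ClientCategoryID     int NOT NULL REFERENCES ClientCategory (ClientCategoryID),
            Value                varchar(100) NOT NULL  -- e.g. 'London', 'C#'
        )

        CREATE TABLE VacancyClientCategoryItem (
            VacancyID            int NOT NULL,
            ClientCategoryItemID int NOT NULL REFERENCES ClientCategoryItem (ClientCategoryItemID),
            PRIMARY KEY (VacancyID, ClientCategoryItemID)
        )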

    Read the article

  • Reading Values Returned by SQLDataSource Before Binding to FormView

    - by peter.newhook
    I have a FormView that shows posts by users. I'd like to add Edit and Delete commands to the post to let the original author edit or delete their post. I'd like these commands to be available only to the author. The FormView is populated by a SqlDataSource that uses a stored procedure. I was thinking I would set the Edit and Delete hyperlinks to Visible=False, then compare the currently signed-in user's guid to the guid of the post's author, and make the hyperlinks visible if the two guids are the same. I've tried using the Selected event of the SqlDataSource to capture the guid (which is returned by the stored procedure); however, I can't find a way to get the values returned by this datasource. How do I access the values returned by a SqlDataSource before they get databound?

    Read the article

  • Rails: has_many association with a table in another database and without foreign key

    - by Fernando
    Here is my situation. I have a model called Account. An account can have one or more contracts. The problem is that I'm dealing with a legacy application and each account's contracts are stored in a different database. Example: Account 1's contracts are in account1_db.contracts. Account 2's contracts are in account2_db.contracts. The database name is a field stored in the accounts table. How can I make the Rails association work with this? This is a legacy PHP application and I simply can't change it to store everything in one table. I need to make it work somehow. I tried this, but it didn't work:

        has_many :contracts, :conditions => [lambda{ Contract.set_table_name(self.database + '.contracts'); return '1' }]

    Any ideas?

    Read the article

  • per process configurable core dump directory

    - by Hanno Stock
    Is there a way to configure the directory where core dump files are placed for a specific process? I have a daemon process written in C++ for which I would like to configure the core dump directory. Optionally, the filename pattern should be configurable too. I know about /proc/sys/kernel/core_pattern; however, this would change the pattern and directory structure globally. Apache has the directive CoreDumpDirectory, so it seems to be possible.

    Read the article

  • Automatically CONCATENATE text on data entry

    - by Bill T
    I am a newbie and need help. I have a table called "Employees". It has 2 fields, [number] and [encode]. I want to automatically take whatever number is entered into [number] and store it in [encode] so that it is preceded by the appropriate amount of 0's to always make 12 digits. Example: the user enters '123' into [number], and '000000000123' is automatically stored in [encode]; the user enters '123456789' into [number], and '000123456789' is automatically stored in [encode]. I think I want to write a trigger to accomplish this. I think that would make it happen at the time of data entry. Is that right? The main idea would be something like this:

        variable1 = LENGTH [number]
        variable2 = REPEAT (0, 12 - variable1)
        variable3 = CONCATENATE (variable2, [number])
        [encode] = variable3

    I just don't know enough to make this happen. ANY help would be FANTASTIC. I have SQL Server 2005 and both fields are text.
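
    A minimal sketch of such a trigger, assuming the [number] and [encode] columns are varchar rather than the text data type (string functions do not work directly on text) and that [number] identifies the affected rows:

        CREATE TRIGGER trg_Employees_Encode ON dbo.Employees
        AFTER INSERT, UPDATE
        AS
        BEGIN
            SET NOCOUNT ON

            -- Left-pad the entered number with zeros to a fixed width of 12.
            UPDATE e
            SET    [encode] = RIGHT(REPLICATE('0', 12) + LTRIM(RTRIM(i.[number])), 12)
            FROM   dbo.Employees e
            INNER JOIN inserted i ON i.[number] = e.[number]
        END

    The same RIGHT(REPLICATE('0', 12) + ..., 12) expression could also be used as a computed column in place of [encode], which avoids the extra UPDATE entirely.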

    Read the article

  • writing to an ioport resulting in segfaults...

    - by Sniperchild
    I'm writing for an Atmel AT91SAM9260 ARM9-cored single board computer [Glomation GESBC-9260]. I call request_mem_region(0xFFFFFC00, 0x100, "name"); (the port range runs from fc00 to fcff), and that works fine and shows up in /proc/iomem. Then I try to write to the last bit of the port at fc20 with writel(0x1, 0xFFFFFC20); and I segfault, specifically "unable to handle kernel paging request at virtual address fffffc20". I'm of the mind that I'm not allocating the right memory space... any helpful insight would be great.

    Read the article

  • What am I doing wrong? (Simple Assembly Loop)

    - by sunnyohno
    It won't let me post the picture. Btw, someone from Reddit.programming sent me over here. So thanks!

        TITLE MASM Template
        ; Description
        ;
        ; Revision date:
        INCLUDE Irvine32.inc

        .data
        myArray BYTE 10, 20, 30, 40, 50, 60, 70, 80, 90, 100

        .code
        main PROC
            call Clrscr
            mov esi, OFFSET myArray
            mov ecx, LENGTHOF myArray
            mov eax, 0
        L1:
            add eax, [esi]
            inc esi
            loop L1
            call WriteInt
            exit
        main ENDP
        END main

    Results in: -334881242

    Read the article

  • Best way to perform DELETE that uses ids from a SELECT statement in MYSQL

    - by Aglystas
    I'm working on a stored procedure that needs to delete specific rows based on a timestamp. Here's what I was going to use until I found out you can't include a SELECT clause in the DELETE statement if they both work on the same table:

        DELETE FROM product
        WHERE merchant_id = 2
          AND product_id IN (SELECT product_id
                             FROM product
                             WHERE merchant_id = 1
                               AND timestamp_updated > 1275062558);

    Is there a good way to handle this within a stored procedure? Normally I would just throw the logic to build the product_id list into PHP, but I'm trying to have all the processing done on the database server.
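
    One workaround commonly suggested for this MySQL restriction (shown as a sketch against the query from the question) is to wrap the subquery in a derived table, so the inner SELECT is materialized before the DELETE touches the table:

        DELETE FROM product
        WHERE merchant_id = 2
          AND product_id IN (
              SELECT product_id FROM (
                  SELECT product_id
                  FROM product
                  WHERE merchant_id = 1
                    AND timestamp_updated > 1275062558
              ) AS ids
          );

    Selecting the ids into a temporary table inside the stored procedure and deleting against that table is another way to keep the processing on the database server.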

    Read the article

  • How to store and compare time-zone sensitive times

    - by Chad Moran
    I have a data structure where an entity has times stored as an int (minutes into the day) for fast comparison. The entity also has a foreign key reference back to a TimeZone table which contains the .NET CLR ID name and its Standard Time/Daylight Time acronyms. Since this information is stored in a time-zone-insensitive form, I was wondering how, in LINQ to SQL, I could convert it into a UTC DateTime for comparison against other times that will be in UTC. Just to be clear, this conversion has to be done server-side so that I can execute the filtering on the SQL Server and not the client. I am using .NET 3.5 SP1 and SQL Server 2008.

    Read the article

  • Syncing a table's records with a service response frequently

    - by Karthik Dheeraj
    I am requesting data from a service whose response is stored in a database. First, I have an empty table; when I make my very first request, the records from the service come into my database table. From then on, whenever I make another request, the service will provide some records which may be the same as in my first response, may be new records, may be updated records, etc. My question is how to update my table with respect to the responses coming from the service from the second request onwards, so that unchanged records remain the same, new records are added, and updated records are updated. Do I need to write a stored procedure on my DB, or is there another workaround? What might the scenario be if I use a NoSQL DB like MongoDB? Thanks in advance.
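
    If the database is SQL Server 2008 or later, one way to express this insert-or-update ("upsert") step is a MERGE from a staging table holding the latest service response into the main table; the sketch below uses hypothetical table and column names:

        MERGE INTO dbo.LocalRecords AS target
        USING dbo.StagedServiceResponse AS source   -- rows from the latest service call
            ON target.RecordId = source.RecordId
        WHEN MATCHED AND (target.Name <> source.Name OR target.Amount <> source.Amount) THEN
            UPDATE SET target.Name   = source.Name,
                       target.Amount = source.Amount
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (RecordId, Name, Amount)
            VALUES (source.RecordId, source.Name, source.Amount);

    With MongoDB the equivalent would be an upsert per record, keyed on the same identifier, rather than a single set-based statement.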

    Read the article

  • What was the most refreshingly honest non-technical comment you saw in the code?

    - by DVK
    OK, so we all saw the lists of "funny" or "bad" comments. However, today, when maintaining an old stored proc, I stumbled upon a comment which I couldn't classify other than "refreshingly brutally honest", left by a previous maintainer around a really freakish (both performance and readability-wise) page-long query: -- Feel free to optimize this if you can understand what it means So, in the first (and hopefully only) poll type question in my history of Stack Overflow, I'd like to hear some other "refreshingly brutally honest" code comments you encountered or written.

    Read the article

  • What's the best practice for log truncation in SQL Server?

    - by kacalapy
    I have a production DB in SQL Server and want to put the final touches on it after the functionality is completed. Prior to shipping it out I want to make sure I have some cleanup in the SQL Server DB and truncate and shrink the log files. Can I have a nightly job run to truncate the logs and shrink the files? This is what I have so far:

        ALTER PROC [dbo].[UTIL_ShrinkDB_TruncateLog]
        AS
            -- exec sp_helpfile
            BACKUP LOG PMIS WITH TRUNCATE_ONLY
            DBCC SHRINKFILE (PMIS, 1)
            DBCC SHRINKFILE (PMIS, 1)
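
    As a point of reference, BACKUP LOG ... WITH TRUNCATE_ONLY was removed in SQL Server 2008, so a job built on it will not survive an upgrade. The usual alternatives are either regular transaction log backups under the FULL recovery model, or switching the database to SIMPLE recovery if point-in-time restore is not required. A sketch of the latter (the logical log file name below is an assumption; check sp_helpfile for the real one):

        ALTER DATABASE PMIS SET RECOVERY SIMPLE
        GO
        DBCC SHRINKFILE (PMIS_log, 1)
        GO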

    Read the article

  • building list of child objects inside main object

    - by Asdfg
    I have two tables like this:

        Category:
        Id   Name
        ------------------
        1    Cat1
        2    Cat2

        Feature:
        Id   Name   CategoryId
        --------------------------------
        1    F1     1
        2    F2     1
        3    F3     2
        4    F4     2
        5    F5     2

    In my .NET classes, I have two POCO classes like this:

        public class Category
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public IList<Feature> Features { get; set; }
        }

        public class Feature
        {
            public int Id { get; set; }
            public int CategoryId { get; set; }
            public string Name { get; set; }
        }

    I am using a stored proc that returns a result set by joining these two tables. This is how my stored proc returns the result set:

        SELECT c.CategoryId, c.Name Category, f.FeatureId, f.Name Feature
        FROM Category c
        INNER JOIN Feature f ON c.CategoryId = f.CategoryId
        ORDER BY c.Name

        -- Result set produced by the above query
        CategoryId   CategoryName   FeatureId   FeatureName
        ---------------------------------------------------
        1            Cat1           1           F1
        1            Cat1           2           F2
        2            Cat2           3           F3
        2            Cat2           4           F4
        2            Cat2           5           F5

    Now, if I want to build the list of categories in my .NET code, I have to loop through the result set and keep adding features until the category changes. This is how my .NET code that builds the Categories and Features looks:

        List<Category> categories = new List<Category>();
        Int32 lastCategoryId = 0;
        Category c = new Category();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // Check if the categoryid is the same as the previous one.
                // If not, add a new category.
                // If yes, don't add the category.
                if (lastCategoryId != Convert.ToInt32(reader["CategoryId"]))
                {
                    c = new Category
                    {
                        Id = Convert.ToInt32(reader["CategoryId"]),
                        Name = reader["CategoryName"].ToString()
                    };
                    c.Features = new List<Feature>();
                    categories.Add(c);
                }
                lastCategoryId = Convert.ToInt32(reader["CategoryId"]);

                // Add the Feature
                c.Features.Add(new Feature
                {
                    Name = reader["FeatureName"].ToString(),
                    Id = Convert.ToInt32(reader["FeatureId"])
                });
            }
            return categories;
        }

    I was wondering if there is a better way to build the list of Categories?

    Read the article

  • Oracle TIMESTAMP w/ timezone data type confusion

    - by JuiceBox1337
    When would you use TIMESTAMP w/ time zone as opposed to TIMESTAMP w/ local time zone? When data is stored in a column of data type TIMESTAMP w/ local time zone, the data is normalized to the database time zone, and the time zone displacement is not stored as part of the column data. When users retrieve the data, Oracle returns it in the users' local session time zone. Isn't that much more useful? I can't think of a reason why I'd want to use TIMESTAMP w/ time zone and get back some gobbledygook with a UTC offset.

    Read the article

  • Upload File to Database in ColdFusion

    - by George Johnston
    I simply would like to upload a file to my database using ColdFusion. I understand how to upload an image to a directory, but I would like to place it directly in the database. I have set a database field to varbinary(MAX) to accept the image and have a stored procedure to insert it. Currently my code for uploading the image to my file system is:

        <cfif isdefined("form.FileUploadImage")>
            <cffile action="upload"
                    filefield="FileUploadImage"
                    destination="#uploadfolder#"
                    nameconflict="overwrite"
                    accept="image/*">
        </cfif>

    I've obviously left some of the supporting code out, but really all I need to do is get a binary representation of the file stored in memory, instead of the file system. Any experts out there that can help? Thanks, George

    Read the article

  • JPA - primary key relationship

    - by megala
    Hi, I created a Student entity in the Google App Engine datastore using JPA. Student entity code:

        @Entity
        @Table(name = "StudentPersonalDetails", schema = "PUBLIC")
        public class StudentPersonalDetails {
            @Id
            @Column(name = "STUDENTNO")
            private Long stuno;

            @Basic
            @Column(name = "STUDENTNAME")
            private String stuname;

            public void setStuname(String stuname) { this.stuname = stuname; }
            public String getStuname() { return stuname; }

            public void setStuno(Long stuno) { this.stuno = stuno; }
            public Long getStuno() { return stuno; }

            public StudentPersonalDetails(Long stuno, String stuname) {
                this.stuno = stuno;
                this.stuname = stuname;
            }
        }

    I stored property values as follows:

        Stuno   Stuname
        1       a
        2       b

    If I store Stuno 1 again with stuname z, it should not allow the record to be inserted, but instead it overwrites the value:

        Stuno   Stuname
        1       z
        2       b

    How do I solve this?

    Read the article
