Search Results

Search found 21434 results on 858 pages for 'query master'.


  • Understanding Wordpress database schema - querying from 3rd party app

    - by deceze
    Is there an easy way to grab the latest posts out of a Wordpress wp_posts table using a simple SQL query? I have a Wordpress 2.9.2 installation as part of, but separate from, a larger system. It has a customized theme to look like the rest of the site but otherwise has nothing to do with it. I want to display the latest handful of headlines of posts made using Wordpress on a site of that other system. Preferably I do not want to mess around with importing any of the Wordpress library files. Looking at the database structure I can't see an easy, straightforward query to simply get the latest revision of the latest posts. The post_status can either be "post" or "inherit", the post_type "post" or "revision" and the parent "0" or the id of the original post of a revision. I can't figure out how to reliably filter different revisions of the same post, drafts, attachments and pages out of this mess and just get the latest revision of the latest posts. I'm aware that the database schema is subject to change in subsequent versions of Wordpress, so it shouldn't be relied upon, but that's a minor concern, since it's such a minor feature that could easily be fixed. If I understood how that database is supposed to work, that is.
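
    A minimal sketch of the kind of query that seems to be wanted, assuming the default wp_ table prefix: published posts are the rows with post_type = 'post' and post_status = 'publish', and those two filters already exclude revisions, drafts, attachments and pages, so no revision-chasing is needed for a headline list.

        SELECT ID, post_title, post_date
        FROM wp_posts
        WHERE post_type = 'post'
          AND post_status = 'publish'
        ORDER BY post_date DESC
        LIMIT 5;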

    Read the article

  • Getting a Specified Cast is not valid while importing data from Excel using Linq to SQL

    - by niceoneishere
    This is my second post. After learning from my first post how fantastic it is to use Linq to SQL, I wanted to try to import data from an Excel sheet into my SQL database.

    First, my Excel sheet: it contains 4 columns, namely ItemNo, ItemSize, ItemPrice and UnitsSold.

    I have created a database table with the following fields:

        table name: ProductsSold
        Id int not null identity   --with auto increment set to true
        ItemNo VarChar(10) not null
        ItemSize VarChar(4) not null
        ItemPrice Decimal(18,2) not null
        UnitsSold int not null

    Now I created a dal.dbml file based on my database and I am trying to import the data from the Excel sheet into the db table using the code below. Everything happens on the click of a button.

        private const string forecast_query = "SELECT ItemNo, ItemSize, ItemPrice, UnitsSold FROM [Sheet1$]";

        protected void btnUpload_Click(object sender, EventArgs e)
        {
            var importer = new LinqSqlModelImporter();
            if (fileUpload.HasFile)
            {
                var uploadFile = new UploadFile(fileUpload.FileName);
                try
                {
                    fileUpload.SaveAs(uploadFile.SavePath);
                    if(File.Exists(uploadFile.SavePath))
                    {
                        importer.SourceConnectionString = uploadFile.GetOleDbConnectionString();
                        importer.Import(forecast_query);
                        gvDisplay.DataBind();
                        pnDisplay.Visible = true;
                    }
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Source.ToString());
                    lblInfo.Text = ex.Message;
                }
                finally
                {
                    uploadFile.DeleteFileNoException();
                }
            }
        }

        // Now here is the code for LinqSqlModelImporter
        public class LinqSqlModelImporter : SqlImporter
        {
            public override void Import(string query)
            {
                // importing data using oledb command and inserting into db using LINQ to SQL
                using (var context = new WSDALDataContext())
                {
                    using (var myConnection = new OleDbConnection(base.SourceConnectionString))
                    using (var myCommand = new OleDbCommand(query, myConnection))
                    {
                        myConnection.Open();
                        var myReader = myCommand.ExecuteReader();
                        while (myReader.Read())
                        {
                            context.ProductsSolds.InsertOnSubmit(new ProductsSold()
                            {
                                ItemNo = myReader.GetString(0),
                                ItemSize = myReader.GetString(1),
                                ItemPrice = myReader.GetDecimal(2),
                                UnitsSold = myReader.GetInt32(3)
                            });
                        }
                    }
                    context.SubmitChanges();
                }
            }
        }

    Can someone please tell me where I am making the error or if I am missing something? This is driving me nuts. When I debugged, the error I got was "when casting from a number, the value must be a number less than infinity". I really appreciate it.

    Read the article

  • Storing float numbers as strings in android database

    - by sandis
    So I have an app where I put arbitrary strings in a database and later extract them like this:

        Cursor DBresult = myDatabase.query(false, Constant.DATABASE_NOTES_TABLE_NAME,
                new String[] {"myStuff"}, where, null, null, null, null, null);
        DBresult.getString(0);

    This works fine in all cases except for when the string looks like a float number, for example "221.123123123". After saving it to the database I can extract the database to my computer and look inside it with a DB-viewer, and the saved number is correct. However, when using cursor.getString() the string "221.123" is returned. I can't for the life of me understand how I can prevent this. I guess I could do a cursor.getDouble() on every single string to see if this gives a better result, but that feels so ugly and inefficient. Any suggestions? Cheers,

    edit: I just made a small test program. This program prints "result: 123.123", when I would like it to print "result: 123.123123123":

        SQLiteDatabase database = openOrCreateDatabase("databas", Context.MODE_PRIVATE, null);
        database.execSQL("create table if not exists tabell (nyckel string primary key);");
        ContentValues value = new ContentValues();
        value.put("nyckel", "123.123123123");
        database.insert("tabell", null, value);
        Cursor result = database.query("tabell", new String[]{"nyckel"}, null, null, null, null, null);
        result.moveToFirst();
        Log.d("TAG","result: " + result.getString(0));
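
    A sketch of the likely cause and fix, assuming SQLite's standard type-affinity rules: a declared column type of "string" matches none of the TEXT/CHAR/CLOB patterns, so the column falls back to NUMERIC affinity, the value is stored as a floating-point number, and the Cursor then formats it with only a few decimal places. Declaring the column as TEXT keeps the value a string end to end, with no change needed on the read side.

        -- hypothetical re-creation of the test table from the question
        CREATE TABLE IF NOT EXISTS tabell (nyckel TEXT PRIMARY KEY);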

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code

    The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data gathering part of the query twice?

        SELECT
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
          (SELECT
             ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
               (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
             ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
             ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM (
             SELECT
               AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
             FROM
               CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
             WHERE
               $X{ IN, C.ID, CityCode } AND
               SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
               S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
               Y.YEAR BETWEEN 1900 AND 2009 AND
               M.YEAR_REF_ID = Y.ID AND
               M.CATEGORY_ID = $P{CategoryCode} AND
               M.ID = D.MONTH_REF_ID AND
               D.DAILY_FLAG_ID <> 'M'
             GROUP BY Y.YEAR
           ) t
          ) ymxb
        WHERE
          $X{ IN, C.ID, CityCode } AND
          SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
          S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
          Y.YEAR BETWEEN 1900 AND 2009 AND
          M.YEAR_REF_ID = Y.ID AND
          M.CATEGORY_ID = $P{CategoryCode} AND
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR

    Question

    How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode } AND
        SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
        S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
        Y.YEAR BETWEEN 1900 AND 2009 AND
        M.YEAR_REF_ID = Y.ID AND
        M.CATEGORY_ID = $P{CategoryCode} AND
        M.ID = D.MONTH_REF_ID AND
        D.DAILY_FLAG_ID <> 'M'

    Related

    http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql

    Thank you!
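
    One common way to avoid running the five-table join twice, sketched here under the assumption that a temporary table and multiple statements are acceptable in this environment (the $X{...} and $P{...} tokens are the JasperReports parameters from the question): materialize the shared data-gathering query once, compute the slope and intercept from that table into session variables, then read the table again for the final result.

        CREATE TEMPORARY TABLE year_amount AS
          SELECT AVG(D.AMOUNT) as AMOUNT,
                 Y.YEAR as YEAR,
                 MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE $X{ IN, C.ID, CityCode }
            AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
            AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
            AND Y.YEAR BETWEEN 1900 AND 2009
            AND M.YEAR_REF_ID = Y.ID
            AND M.CATEGORY_ID = $P{CategoryCode}
            AND M.ID = D.MONTH_REF_ID
            AND D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR;

        SELECT ((sum(YEAR) * sum(AMOUNT)) - (count(1) * sum(YEAR * AMOUNT))) /
               (power(sum(YEAR), 2) - count(1) * sum(power(YEAR, 2))),
               ((sum(YEAR) * sum(YEAR * AMOUNT)) - (sum(AMOUNT) * sum(power(YEAR, 2)))) /
               (power(sum(YEAR), 2) - count(1) * sum(power(YEAR, 2)))
        INTO @slope, @intercept
        FROM year_amount;

        SELECT AMOUNT,
               YEAR * @slope + @intercept as REGRESSION_LINE,
               YEAR,
               AMOUNT_DATE
        FROM year_amount;

    If the reporting tool only allows a single statement, the same idea still applies, but the data-gathering query has to be repeated in a derived table or the aggregation moved into the application.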

    Read the article

  • Pass Linq Expression to a function

    - by Kushan Hasithe Fernando
    I want to pass a property list of a class to a function. Within the function, based on the property list, I'm going to generate a query, with exactly the same functionality as the Linq Select method. Here I'm going to implement this for an Ingress database. As an example, this is the kind of select I want to run from the front end. My entity class is like this:

        public class Customer
        {
            [System.Data.Linq.Mapping.ColumnAttribute(Name="Id",IsPrimaryKey=true)]
            public string Id { get; set; }

            [System.Data.Linq.Mapping.ColumnAttribute(Name = "Name")]
            public string Name { get; set; }

            [System.Data.Linq.Mapping.ColumnAttribute(Name = "Address")]
            public string Address { get; set; }

            [System.Data.Linq.Mapping.ColumnAttribute(Name = "Email")]
            public string Email { get; set; }

            [System.Data.Linq.Mapping.ColumnAttribute(Name = "Mobile")]
            public string Mobile { get; set; }
        }

    I want to call a Select function like this:

        var result = dataAccessService.Select<Customer>(C=>C.Name,C.Address);

    Then, using result, I can get the Name and Address properties' values. I think my Select function should look like this (I think this should be done using a Linq Expression, but I'm not sure what the input parameters and return type should be):

        class DataAccessService
        {
            // I'm not sure about this return type and input types, generic types.
            public TResult Select<TSource,TResult>(Expression<Func<TSource,TResult>> selector)
            {
                // Here using the property list,
                // I can get the ColumnAttribute name value and I can generate a select query.
            }
        }

    This is an attempt to create functionality like in Linq, but I'm not an expert in Linq Expressions. There is a project called DbLinq from MIT, but it's a big project and I still couldn't grab anything helpful from that. Can someone please help me to start this, or link me to some useful resources to read about this?

    Read the article

  • Group / User based security. Table / SQL question

    - by Brett
    Hi, I'm setting up a group / user based security system. I have 4 tables as follows:

        user
        groups
        group_user_mappings
        acl

    where acl is the mapping between an item_id and either a group or a user. The way I've done the acl table, I have 3 columns of note (there's actually a 4th as an auto-id, but that is irrelevant):

        col 1  item_id  (item to access)
        col 2  user_id  (user that is allowed to access)
        col 3  group_id (group that is allowed to access)

    So for example:

        item1, peter, ,
        item2, , group1
        item3, jane, ,

    so the acl will give access either to a user or to a group. Any one line in the ACL table will either have an item - user mapping, or an item - group mapping. If I want a query that returns all objects a user has access to, I think I need a SQL query with a UNION, because I need 2 separate queries that join like:

        item - acl - group - user
        AND
        item - acl - user

    This I guess will work OK. Is this how it's normally done? Am I doing this the right way? It seems a little messy. I was thinking I could get around it by creating a single user group for each person, so I only ever deal with groups in my SQL, but this seems a little messy as well..
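
    A minimal sketch of the UNION approach, with hypothetical column names (acl.item_id, acl.user_id, acl.group_id, group_user_mappings.user_id/group_id) since the question doesn't spell them all out: one branch picks up items granted directly to the user, the other picks up items granted to any group the user belongs to. UNION (rather than UNION ALL) also removes duplicates when an item is granted both ways.

        SELECT a.item_id
        FROM acl a
        WHERE a.user_id = @user_id

        UNION

        SELECT a.item_id
        FROM acl a
        INNER JOIN group_user_mappings m ON m.group_id = a.group_id
        WHERE m.user_id = @user_id;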

    Read the article

  • IIS URL Rewriting: How can I reliably keep relative paths when serving multiple files?

    - by NVRAM
    My WebApp is partly a CMS, and when I serve up an HTML page to the user it typically contains relative paths in a.href and img.src attributes. I currently have them accessed by URLs like ~/get-data.aspx/instance/user/page.html -- where instance indicates the particular instance for the report and "user/page.html" is a path created by an external application that generates the content. This works pretty reliably with code in the application's BeginRequest method that translates the text after ".aspx" into a query string, then uses Context.RewritePath(). So far so good, but I've just tripped over something that took me by surprise: it appears that if any part of the query string ("instance/user/page.html") happens to contain a plus sign ("+"), the BeginRequest method is never called, and a 404 is immediately returned to the user. So my questions are: 1) Am I correct in my belief that a "+" would cause the 404, and if so, are there other things that could cause similar problems? 2) Is there a way around that problem (perhaps a different method than BeginRequest)? 3) Is there a better way to preserve relative URL paths for generated content than what I'm using? I'd rather not require site admins to install a 3rd-party rewrite tool if I can help it.

    Read the article

  • rsyslog forward all except ldap

    - by Brian
    I have Centos 6 servers running openLDAP. In the rsyslog.conf, I forward the logs to my central server with this line: *.* @10.10.10.10:514 openldap seems incredibly chatty. I have 3 servers in a multi-master cluster. Those 3 servers generate twice as many logs as my other 80 servers combined. I have been unsuccessful in figuring out how to tell openLDAP to use a sensible log level. (we never specifically set the log level) Since these are my main authentication sources, I'm a bit hesitant to "play around" with them. Is there a way to tell rsyslog to forward everything EXCEPT LOCAL4?

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size around 120,000 or 150,000 rows, I see a dramatic performance cost in doing any query on the table. For comparison, I have another table, which grows in size at about the same rate, but only contains scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns, I am only using regular SQL query statements. Some specifics: both tables contain a DateTime field called SampleTime. A statement like (it's in a stored procedure, but I show you the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns. That depends on which drive I set my database on. At the moment it sits on a different spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL 2008 EXPRESS and the full-blown SQL Server 2008, and that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam
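
    A sketch of the usual first step, assuming neither table has a covering index for this query: without one, the MAX(SampleTime) lookup scans the whole table, and rows in the XML table are far wider, so the scan touches many more pages. A narrow nonclustered index on the filter and aggregate columns lets the query be answered without reading the XML data at all.

        CREATE NONCLUSTERED INDEX IX_MyRecords_PlacementID_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);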

    Read the article

  • Excessive use of Inner Join for more than 3 tables

    - by Archangel08
    Good Day, I have 4 tables in my DB (not the actual names but almost similar), which are the following: employee, education, employment_history, referrence. employee_id is the name of the foreign key from the employee table. Here's the example (not actual) data:

        Employee
        ID  Name        Birthday    Gender  Email
        1   John Smith  08-15-2014  Male    [email protected]
        2   Jane Doe    00-00-0000  Female  [email protected]
        3   John Doe    00-00-0000  Male    [email protected]

        Education
        Employee_ID  Primary          Secondary          Vocation
        1            Westside School  Westshore H.S      SouthernBay College
        2            Eastside School  Eastshore H.S      NorthernBay College
        3            Northern School  SouthernShore H.S  WesternBay College

        Employment_History
        Employee_ID  WorkOne          StartDate   Enddate
        1            StarBean Cafe    12-31-2012  01-01-2013
        2            Coffebucks Cafe  11-01-2012  11-02-2012
        3            Latte Cafe       01-02-2013  04-05-2013

        Referrence
        Employee_ID  ReferrenceOne     Address           Contact
        1            Abraham Lincoln   Lincoln Memorial  0000000000
        2            Frankie N. Stein  Thunder St.       0000000000
        3            Peter D. Pan      Neverland Ave.    0000000000

    NOTE: I've only included a few columns, though the rest are part of the query. And below is the code I've been working on for 3 consecutive days:

        $sql=mysql_query("SELECT emp.id,emp.name,emp.birthday,emp.pob,emp.gender,emp.civil,emp.email,
            emp.contact,emp.address,emp.paddress,emp.citizenship,educ.employee_id,educ.elementary,
            educ.egrad,educ.highschool,educ.hgrad,educ.vocational,educ.vgrad,ems.employee_id,
            ems.workOne,ems.estartDate,ems.eendDate,ems.workTwo,ems.wstartDate,ems.wendDate,
            ems.workThree,ems.hstartDate,ems.hendDate
            FROM employee AS emp
            INNER JOIN education AS educ ON educ.employee_id='emp.id'
            INNER JOIN employment_history AS ems ON ems.employee_id='emp.id'
            INNER JOIN referrence AS ref ON ref.employee_id='emp.id'
            WHERE emp.id='$id'");

    Is it okay to use INNER JOIN this way? Or should I modify my query to get the results that I wanted? I've also tried to use LEFT JOIN but it still doesn't return anything. I don't know where I went wrong. You see, as far as I can tell, I've been using the INNER JOIN in the correct manner (since it was placed before the WHERE clause), so I couldn't think of what could possibly have gone wrong. Do you guys have a suggestion? Thanks in advance.
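
    A sketch of the joins with the one change that seems to matter: in the query above each ON clause compares employee_id with the quoted literal string 'emp.id' rather than with the column emp.id, so no rows can ever match (and LEFT JOIN then just returns NULLs for the joined tables). Removing the quotes (everything else, including joining four tables this way, is perfectly normal) gives:

        SELECT emp.*, educ.*, ems.*, ref.*
        FROM employee AS emp
        INNER JOIN education AS educ          ON educ.employee_id = emp.id
        INNER JOIN employment_history AS ems  ON ems.employee_id  = emp.id
        INNER JOIN referrence AS ref          ON ref.employee_id  = emp.id
        WHERE emp.id = '$id';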

    Read the article

  • Best way to dynamically get column names from oracle tables

    - by MNC
    Hi, We are using an extractor application that exports data from the database to csv files. Based on some condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table. So to satisfy the UNION ALL condition we are using nulls to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in the table projection (i.e. a new column added, an existing column modified, a column dropped) we have to manually change the code in the application. Can you please give some suggestions on how to extract the column names dynamically so that changes in the table structure do not require a change in the code? My concern is the condition that decides which table to query. The variable condition is like: if the condition is A, then load from TableX; if the condition is B, then load from TableA and TableY. We must know from which table we need to get data. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more condition, which is that some columns need to be excluded, and these columns are different for each table. I am trying to solve the problem only for dynamically generating the list of columns. But my manager told me to design a solution at the conceptual level rather than just fixing this case. This is a very big system with providers and consumers constantly loading and consuming data, so he wanted a solution that can be general. So what is the best way of storing the condition, table name and excluded columns? One way is storing them in the database. Are there any other ways? If yes, what is the best? I have to give at least a couple of ideas before finalizing. Thanks,
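
    A minimal sketch against the Oracle data dictionary, assuming the extractor can see the tables through ALL_TAB_COLUMNS (USER_TAB_COLUMNS works the same way for the owning schema); the EXTRACT_RULES table holding the excluded columns is hypothetical, standing in for wherever the condition / table name / exclusion metadata ends up being stored:

        SELECT column_name
        FROM   all_tab_columns
        WHERE  owner       = 'MYSCHEMA'
        AND    table_name  = 'TABLEX'
        AND    column_name NOT IN (SELECT column_name
                                   FROM   extract_rules
                                   WHERE  table_name = 'TABLEX')
        ORDER  BY column_id;

    The returned list can then be concatenated into the SELECT clause at run time, so a new or dropped column never requires an application change.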

    Read the article

  • Heterogeneous queries require the ANSI_NULLS

    - by Dezigo
    I wrote a trigger:

        USE [TEST]
        GO
        /****** Object: Trigger [dbo].[TR_POSTGRESQL_UPDATE_YC] Script Date: 05/26/2010 08:54:03 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER TRIGGER [dbo].[TR_POSTGRESQL_UPDATE_YC]
        ON [dbo].[BCT_CNTR_EVENTS]
        FOR INSERT
        AS
        BEGIN
            DECLARE @MOVE_TIME varchar(14);
            DECLARE @MOVE_TIME_FORMATED varchar(20);
            DECLARE @RELEASE_NOTE varchar(32);
            DECLARE @CMR_NUMBER varchar(15);
            DECLARE @MOVE_TYPE varchar(2);

            SELECT @MOVE_TIME = inserted.move_time
                  ,@MOVE_TYPE = inserted.move_type
                  ,@RELEASE_NOTE = inserted.release_note
                  ,@CMR_NUMBER = inserted.cmr_number
            FROM inserted

            IF(@MOVE_TYPE = 'YC')
            BEGIN
                SET @MOVE_TIME_FORMATED = SUBSTRING(@MOVE_TIME,1,4) + '-' + SUBSTRING(@MOVE_TIME,5,2) + '-' + SUBSTRING(@MOVE_TIME,7,2) + ' 00:00:00'

                --UPDATE OpenQuery(POSTGRESQL_SERV,'SELECT visit_cmr,visit_timestamp,visit_pin FROM VISIT')
                --   SET visit_cmr = @RELEASE_NOTE
                --   WHERE visit_timestamp = @MOVE_TIME_FORMATED
                --   AND visit_pin = right(@CMR_NUMBER,5)
                --   AND visit_cmr IS NULL
            END

            SET NOCOUNT ON;
        END

    When I insert a row, I get the error:

        Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection. This ensures consistent query semantics. Enable these options and then reissue your query.

    Then of course I set ANSI_WARNINGS ON, but it does not work for me (the trigger targets a linked PostgreSQL server). I have restarted the server; it still does not work.
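
    A sketch of the usual remedy, not a guaranteed fix: for distributed (linked-server) queries, ANSI_NULLS and QUOTED_IDENTIFIER are captured at the time the trigger is created or altered, but ANSI_WARNINGS comes from the session that performs the INSERT that fires the trigger, so it has to be ON in that connection as well. The INSERT below uses a hypothetical column list and values purely for illustration.

        -- create/alter the trigger with these options enabled
        SET ANSI_NULLS ON;
        SET QUOTED_IDENTIFIER ON;
        GO
        -- ... ALTER TRIGGER statement here ...
        GO

        -- and make sure the session doing the insert also has ANSI settings on
        SET ANSI_NULLS ON;
        SET ANSI_WARNINGS ON;
        INSERT INTO dbo.BCT_CNTR_EVENTS (move_time, move_type, release_note, cmr_number)  -- hypothetical columns/values
        VALUES ('20100526000000', 'YC', 'NOTE-1', 'CMR-12345');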

    Read the article

  • FreeTDS runs out of memory from DBD::Sybase

    - by skiphoppy
    When I add client charset = UTF-8 to my freetds.conf file, my DBD::Sybase program emits: Out of memory! and terminates. This happens when I call execute() on an SQL query statement that returns any ntext fields. I can return numeric data, datetimes, and nvarchars just fine, but whenever one of the output fields is ntext, I get this error. All these queries work perfectly fine without the UTF-8 setting, but I do need to handle some characters that throw warnings under the default character set. (See related question.) The error message is not formatted the same way other DBD::Sybase error messages seem to be formatted. I do get a message that a rollback() is being issued, though. (My false AutoCommit flag is being honored.) I think I read somewhere that FreeTDS uses the iconv program to convert between character sets; is it possible that this message is being emitted from iconv? If I execute the same query with the same freetds.conf settings in tsql (FreeTDS's command-line SQL shell), I don't get the error. I'm connecting to SQL Server. What do I need to do to get these queries to return successfully?

    Read the article

  • php paging and the use of limit clause

    - by Average Joe
    Imagine you have a 1M-record table and you want to limit the search results down to, say, 10,000 and not more than that. So what do I use for that? Well, the answer is to use the LIMIT clause. Example:

        select recid from mytable order by recid asc limit 10000

    This is going to give me the last 10,000 records entered into this table. So far no paging. But the LIMIT phrase is already in use. That brings the question to the next level. What if I want to page through this particular record set 100 records at a time? Since the LIMIT phrase is already part of the original query, how do I use it again, this time to take care of the paging? If the original query did not have a LIMIT clause to begin with, I'd adjust it as limit 0,100, then adjust it as 100,100 and then 200,100 and so on while the paging takes its course. But at this time, I cannot. You'd almost think you'd want to use two LIMIT phrases one after the other, which is not going to work:

        limit 10000 limit 0,1000

    That would surely error out. So what's the solution in this particular case?
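
    A minimal sketch of one common answer: keep the capping LIMIT inside a derived table, then apply the paging LIMIT to the outer query. The inner query fixes the 10,000-row window; the outer one takes a 100-row page of it (page 3, offset 200, in this example).

        SELECT recid
        FROM (
            SELECT recid
            FROM mytable
            ORDER BY recid ASC
            LIMIT 10000
        ) AS capped
        LIMIT 200, 100;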

    Read the article

  • populate checkboxes with database.

    - by amby
    Hi, I have to populate checkboxes with data coming from a database, but no checkbox is showing on my page. Please show me the correct way to do this. In the C# file's Page_Load method I am doing this:

        public partial class dbTest1 : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string Server = "al2222";
                string Username = "hshshshsh";
                string Password = "sjjssjs";
                string Database = "database1";

                string ConnectionString = "Data Source=" + Server + ";";
                ConnectionString += "User ID=" + Username + ";";
                ConnectionString += "Password=" + Password + ";";
                ConnectionString += "Initial Catalog=" + Database;

                string query = "Select * from Customer_Order where orderNumber = 17";

                using (SqlConnection conn = new SqlConnection(ConnectionString))
                {
                    using (SqlCommand cmd = new SqlCommand(query, conn))
                    {
                        conn.Open();
                        SqlDataReader dr = cmd.ExecuteReader();
                        while (dr.Read())
                        {
                            if (!IsPostBack)
                            {
                                Interests.DataSource = dr;
                                Interests.DataTextField = "OptionName";
                                Interests.DataValueField = "OptionName";
                                Interests.DataBind();
                            }
                        }
                        conn.Close();
                        conn.Dispose();
                    }
                }
            }
        }

    and in the .aspx I am using this:

        <asp:CheckBoxList ID="Interests" runat="server"></asp:CheckBoxList>

    Please tell me the correct way to do this.

    Read the article

  • What is happening in this T-SQL code? (Concatenting the results of a SELECT statement)

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question:

        DECLARE @column_list AS varchar(max)

        SELECT @column_list = COALESCE(@column_list, ',') +
            'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) +
            ' Then Quantity Else 0 End) As [' + CONVERT(varchar, Sku2) +
            ' - ' + Convert(varchar,Description) +'],'
        FROM OrderDetailDeliveryReview
        Inner Join InvMast on SKU2 = SKU and LocationTypeID=4
        GROUP BY Sku2 , Description
        ORDER BY Sku2

        Set @column_list = Left(@column_list,Len(@column_list)-1)

        Select @column_list

        ----------------------------------------
        1 row is returned:
        ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ...

    The T-SQL code does exactly what I want, which is to make a single result based on the results of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list = ... statement is putting multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that, by having the variable within the SELECT statement, the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
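
    A small self-contained illustration of the same mechanism (the three-row derived table is made up just for the demo): when a SELECT assigns to a variable, the assignment expression is evaluated once for every row the query produces, so the variable accumulates text from each row instead of the statement returning a row set. This row-by-row concatenation is a widely used T-SQL idiom, though its ordering behaviour is not formally guaranteed, which is worth keeping in mind.

        DECLARE @list varchar(max);

        SELECT @list = COALESCE(@list + ', ', '') + t.name
        FROM ( SELECT 'alpha' AS name
               UNION ALL SELECT 'bravo'
               UNION ALL SELECT 'charlie' ) AS t;

        SELECT @list;   -- alpha, bravo, charlie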

    Read the article

  • Adjust static value into dynamic (javascript) value possible in Sharepoint allitems.aspx page?

    - by lerac
        <SharePoint:SPDataSource runat="server" IncludeHidden="true" SelectCommand="&lt;View&gt;&lt;Query&gt;&lt;OrderBy&gt;&lt;FieldRef Name=&quot;EventDate&quot;/&gt;&lt;/OrderBy&gt;&lt;Where&gt;&lt;Contains&gt;&lt;FieldRef Name=&quot;lawyer_x0020_1&quot;/&gt;&lt;Value Type=&quot;Note&quot;&gt;F. Sanches&lt;/Value&gt;&lt;/Contains&gt;&lt;/Where&gt;&lt;/Query&gt;&lt;/View&gt;" id="datasource1" DataSourceMode="List" UseInternalName="true"><InsertParameters><asp:Parameter DefaultValue="{ANUMBER}" Name="ListID"></asp:Parameter>

    This code line is just one line of the allitems.aspx of a SharePoint list item. It only displays items where lawyer 1 = F. Sanches. Before I start messing around with the .ASPX page, I wonder if it is possible to change F. Sanches (in the code) into a dynamic variable (from a JavaScript value, or something else that can be used to place the JavaScript value in there dynamically). If I put any JavaScript code in the line it will not work. P.S. Ignore the ANUMBER part in the code. Let's say, to make it simple, I have a JavaScript variable like this (static now, but with my other code it is dynamic). It would already be an achievement if it would place a static JavaScript variable:

        <SCRIPT type=text/javascript>javaVAR = "P. Janssen";</script>

    If yes -- how? If no -- thank you!

    Read the article

  • Getting the ranking of a photo in SQL

    - by Jake Petroules
    I have the following tables:

        Photos     [ PhotoID, CategoryID, ... ]  PK [ PhotoID ]
        Categories [ CategoryID, ... ]           PK [ CategoryID ]
        Votes      [ PhotoID, UserID, ... ]      PK [ PhotoID, UserID ]

    A photo belongs to one category. A category may contain many photos. A user may vote once on any photo. A photo can be voted for by many users. I want to select the ranks of a photo (by counting how many votes it has) both overall and within the scope of the category that photo belongs to. The count of SELECT * FROM Votes WHERE PhotoID = @PhotoID being the number of votes a photo has. I want the resulting table to have generated columns for overall rank, and rank within category, so that I may order the results by either. So for example, the resulting table from the query should look like:

        PhotoID  VoteCount  RankOverall  RankInCategory
        1        48         1            7
        3        45         2            5
        19       33         3            1
        2        17         4            3
        7        9          5            5
        ...

    ...you get the idea. How can I achieve this? So far I've got the following query to retrieve the vote counts, but I need to generate the ranks as well:

        SELECT PhotoID, UserID, CategoryID, DateUploaded,
               (SELECT COUNT(CommentID) AS Expr1
                FROM dbo.Comments
                WHERE (PhotoID = dbo.Photos.PhotoID)) AS CommentCount,
               (SELECT COUNT(PhotoID) AS Expr1
                FROM dbo.PhotoVotes
                WHERE (PhotoID = dbo.Photos.PhotoID)) AS VoteCount,
               Comments
        FROM dbo.Photos
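
    A minimal sketch using ranking window functions (available in SQL Server 2005 and later), assuming the Photos/Votes tables from the schema listed above (the sample query refers to dbo.PhotoVotes for the votes, so substitute whichever name is the real one): count each photo's votes once, then rank that count both overall and partitioned by the photo's category.

        SELECT p.PhotoID,
               COUNT(v.UserID) AS VoteCount,
               RANK() OVER (ORDER BY COUNT(v.UserID) DESC) AS RankOverall,
               RANK() OVER (PARTITION BY p.CategoryID
                            ORDER BY COUNT(v.UserID) DESC) AS RankInCategory
        FROM dbo.Photos p
        LEFT JOIN dbo.Votes v ON v.PhotoID = p.PhotoID
        GROUP BY p.PhotoID, p.CategoryID
        ORDER BY RankOverall;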

    Read the article

  • Why are my USB 2.0 devices crashing Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2 year old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a generic ATX motherboard with no identifying markings other than one stamp that says "Rev.B". CPU-Z identifies the motherboard model as VT8601 but doesn't provide me with any manufacturer name. On board it has 1 x 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for KB and mouse, parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU housing that is seating a 1.3GHz Intel Celeron CPU, 3 x PCI, 1 x AGP - although you can only use 2 of the PCI slots if you use the AGP slot due to the physical layout of the board. It's got 768Mb PC133 SDRAM - 1 x 512Mb & 1 x 256Mb installed as well as a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 x external + 1 x internal USB connectors. It has a DVD+/-RW running as master on IDE1 and a 1.44Mb 3.5" floppy drive connected to the floppy connector. It has an 80Gb Western Digital hard drive running as master on IDE0. All this is sitting in a slimline case. I don't know the wattage of the PSU, but can post this later if this proves to be helpful. The motherboard is running a version of Award BIOS for which I don't have the version number to hand but can again post this later if it would be helpful. The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website. The problem: If I hook any devices to the USB 2.0 sockets, it hangs the whole machine and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt and I have to hard boot it to recover. Workarounds found: I can plug the same devices into the on board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected. Attempted solutions: I've seen suggestions that this could be a power problem - that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem [after all the motherboard has the same standard connector regardless of the PSU wattage], I tried disabling all the on board devices that I'm not using - on board LAN, the second COM port, the AGP connector etc. through the BIOS in what I'm sure is a futile attempt to reduce the power consumption... I also modified the ACPI and power management settings. It didn't have any noticeable affect, although it didn't do any harm either. 
Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affected the number of drives you could hook up to the power connectors, is that right? I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done which I only just thought of while writing this essay is trying the USB 2.0 card in a different PCI slot, or re-ordering the wi-fi and USB cards in the slots... although I'm not sure if this will make any difference - does anyone have any experience that would suggest this might work? Other thoughts/questions: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS, would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to be able to find a BIOS edition specific for this motherboard or will any version of Award BIOS function in its place? Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

  • XP can't read data from transferred HD

    - by Alexander Miller
    Computer A, running XP, died. XP was installed on a fresh HD in computer B. Slaved data-backup HD from A was installed as slave on B. B will not read it; shows only 2 folders, Recycler and System Volume Info. All of these are older machines with IDE drives. What's going on & how can I read/transfer the data from the transferred drive? This was only a trial run. Actually I will need to transfer the master HD from A - which has XP on it - and read from its data partition because (blush) the backup drive was not up to date.

    Read the article

  • How to stop nginx on Mac OS 10.6.3

    - by Alex Kaushovik
    I've installed nginx server on my Mac from MacPorts: sudo port install nginx. Then I followed the recommendation from the port installation console and created the launchd startup item for nginx, then started the server. It works fine (after I renamed nginx.conf.example to nginx.conf and renamed mime.types.example to mime.types), but I couldn't stop it... I tried sudo nginx -s stop - this doesn't stop the server, I can still see "Welcome to nginx!" page in my browser on http://localhost, also I still see master and worker processes of nginx with ps -e | grep nginx. What is the best way to start/stop nginx on Mac? BTW, I've added "daemon off;" into nginx.conf - as recommended by various resources. Thank you.

    Read the article

  • Beginner Question: For extract a large subset of a table from MySQL, how does Indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs.

    tblA has the columns colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has the columns colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA as a
        INNER JOIN tblB as b
            ON a.colA=b.colA AND a.colB=b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries.

    Now my questions:

    1) What indexes should I create to optimize this query? I think I just need a multiple-column index on (colA, colB) for both tables. I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the multiple-column index.

    2) Is INNER JOIN correct? I just want results where a match is found.

    3) Is it faster if I join (tblA to tblB) or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that.

    4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.
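
    A minimal sketch of the indexes from question 1, assuming none exist yet: a composite index on the two join columns of each table lets MySQL resolve the ON condition through index lookups instead of scanning the 10^9-row table for every candidate, and it typically drives the join from the much smaller tblB.

        CREATE INDEX idx_tblA_colA_colB ON tblA (colA, colB);
        CREATE INDEX idx_tblB_colA_colB ON tblB (colA, colB);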

    Read the article

  • How to stop nginx on Mac OS X

    - by Alex Kaushovik
    I've installed nginx server on my Mac from MacPorts: sudo port install nginx. Then I followed the recommendation from the port installation console and created the launchd startup item for nginx, then started the server. It works fine (after I renamed nginx.conf.example to nginx.conf and renamed mime.types.example to mime.types), but I couldn't stop it... I tried sudo nginx -s stop - this doesn't stop the server, I can still see "Welcome to nginx!" page in my browser on http://localhost/, also I still see master and worker processes of nginx with ps -e | grep nginx. What is the best way to start/stop nginx on Mac? BTW, I've added "daemon off;" into nginx.conf - as recommended by various resources. Thank you.

    Read the article

  • How to handle image/gif type response on client side using GWT

    - by user200340
    Hi all, I have a question about how to handle an image/gif type response on the client side; any suggestion will be great. There is a service which is responsible for retrieving an image (only one at a time at the moment) from the database. The code is something like this:

        JDBC Connection
        Construct MYSQL query.
        Execute query
        If has ResultSet, retrieve first one
        {
            //save image into Blob image, "img" is the only entity in the image table.
            image = rs.getBlob("img");
        }
        response.setContentType("image/gif");        //set response type
        InputStream in = image.getBinaryStream();    //output Blob image to InputStream
        int bufferSize = 1024;                       //buffer size
        byte[] buffer = new byte[bufferSize];        //initial buffer
        int length = 0;
        //read length data from inputstream and store into buffer
        while ((length = in.read(buffer)) != -1)
        {
            out.write(buffer, 0, length);            //write into ServletOutputStream
        }
        in.close();
        out.flush();                                 //write out

    The code on the client side:

        ....
        imgform.setAction(GWT.getModuleBaseURL() + "serviceexample/ImgRetrieve");
        ....
        ClickListener
        {
            OnClick, then imgform.submit();
        }
        formHandler
        {
            onSubmit, form validation
            onSubmitComplete ???????   //handle response, and display image
        }

    Here is my question: I had tried

        Image img = new Image(GWT.getHostPageBaseURL() + "serviceexample/ImgRetrieve");
        img.setSize("300", "300");
        imgpanel.add(img);

    but I only got a non-displayed image with 300x300 size. So, how should I handle the response in this case? Thanks,

    Read the article

  • Excel: What formula combines this data into one COUNT amount?

    - by Mike
    I have 30 colleagues who are answering questions over 3 time periods. Each has their own Excel workbook with the questions, and over the year they update it. I collate their worksheets into one master worksheet, but now need to combine their answers into a simple table. The questions, the time periods and then a COUNT of how many answered it. For example: I need a table that shows me how many people (not the persons name at this point) answered question 10 in time period 2. I can't use a database before someone mentions it ;). Many thanks Mike.

    Read the article
