Search Results

Search found 13182 results on 528 pages for 'ad group'.


  • LEFT OUTER JOIN in NHibernate with SQL semantics

    - by Yuval
    Hi, is it possible to use HQL/ICriteria to produce a query with the same semantics as the following SQL query: select table1.A, table2.B, count(*) from table1 left join (select table2.parent_id, table2.B from table2 where table2.C = 'Some value') as table2 on table2.parent_id = table1.id group by table1.A, table2.B order by table1.A In particular, what I'd like is to receive rows (that is, objects) from table1 that have no matching rows in table2. However, I only get the rows from table1 that have matches in table2. Is this the meaning of 'LEFT JOIN' in HQL? And if so, how can I get it to join on a subquery? Thanks.

    Read the article

  • Java RegEx Pattern not matching (works in .NET)

    - by CDSO1
    Below is an example that is producing an exception in Java (and not matching the input). Perhaps I misunderstood the JavaDoc, but it looks like it should work. Using the same pattern and input in C# will produce a match. import java.util.regex.Matcher; import java.util.regex.Pattern; public class Main { public static void main(String[] args) { String pattern = "aspx\\?uid=([^']*)"; Pattern p = Pattern.compile(pattern); Matcher m = p.matcher("id='page.aspx?uid=123'"); System.out.println(m.groupCount() > 0 ? m.group(1) : "No Matches"); } }

    Read the article

  • MySQL Select problems

    - by John Nuñez
    Table #1: qa_returns_items Table #2: qa_returns_residues I have been trying for a long time to get this result: item_code - item_quantity 2 - 1 3 - 2 IF qa_returns_items.item_code = qa_returns_residues.item_code AND status_code = 11 THEN item_quantity = qa_returns_items.item_quantity - qa_returns_residues.item_quantity ELSEIF qa_returns_items.item_code = qa_returns_residues.item_code AND status_code = 12 THEN item_quantity = qa_returns_items.item_quantity + qa_returns_residues.item_quantity ELSE show differences END IF I tried this query: select SubQueryAlias.item_code, total_ecuation, SubQueryAlias.item_unitprice, SubQueryAlias.item_unitprice * total_ecuation as item_subtotal, item_discount, (SubQueryAlias.item_unitprice * total_ecuation) - item_discount as item_total from ( select ri.item_code , case status_code when 11 then ri.item_quantity - rr.item_quantity when 12 then ri.item_quantity + rr.item_quantity end as total_ecuation , rr.item_unitprice , rr.item_quantity , rr.item_discount * rr.item_quantity as item_discount from qa_returns_residues rr left join qa_returns_items ri on ri.item_code = rr.item_code WHERE ri.returnlog_code = 1 ) as SubQueryAlias where total_ecuation > 0 GROUP BY (item_code); The query returns this result: item_code - item_quantity 1 - 2 2 - 2
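
    The IF/ELSEIF logic above can also be folded into one signed SUM per item. A rough sketch only - it assumes status_code lives on qa_returns_residues and it ignores the final 'show differences' branch:

        SELECT ri.item_code,
               ri.item_quantity
                 + COALESCE(SUM(CASE rr.status_code
                                  WHEN 11 THEN -rr.item_quantity   -- status 11: subtract the residue
                                  WHEN 12 THEN  rr.item_quantity   -- status 12: add the residue back
                                  ELSE 0
                                END), 0) AS item_quantity
        FROM qa_returns_items ri
        LEFT JOIN qa_returns_residues rr ON rr.item_code = ri.item_code
        WHERE ri.returnlog_code = 1
        GROUP BY ri.item_code, ri.item_quantity;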

    Read the article

  • Enforcing a query in MySql to use a specific index

    - by Hossein
    Hi, I have a large table consisting of only 3 columns (id(INT), bookmarkID(INT), tagID(INT)). I have two BTREE indexes, one on each of the bookmarkID and tagID columns. The table has about 21 million records. I am trying to run this query: SELECT bookmarkID,COUNT(bookmarkID) AS count FROM bookmark_tag_map GROUP BY tagID,bookmarkID HAVING tagID IN (-----"tagIDList"-----) AND count >= N which takes ages to return the results. I read somewhere that if I make a composite index on tagID,bookmarkID together, I will get a much faster result. I created that index and tried the query again, but it seems the query is not using the new index I have made. I ran EXPLAIN and saw that this is indeed the case. My question now is: how can I force a query to use a specific index? Comments on other ways to make the query faster are also welcome. Thanks
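
    A minimal sketch of MySQL's index-hint syntax, assuming the new composite index is called tag_bookmark_idx (the real index name will differ); note that moving the tagID filter from HAVING into WHERE also gives the optimizer a chance to use that index in the first place:

        SELECT bookmarkID, COUNT(bookmarkID) AS cnt
        FROM bookmark_tag_map FORCE INDEX (tag_bookmark_idx)   -- or USE INDEX (...) for a softer hint
        WHERE tagID IN (1, 2, 3)                               -- placeholder for "tagIDList"
        GROUP BY tagID, bookmarkID
        HAVING cnt >= 5;                                       -- placeholder for N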

    Read the article

  • What the heck is goin' on with the column Width or Why I do hate rdlc designer in VS...

    - by plotnick
    I can't understand it... I put a column into a Tablix in the .rdlc designer of VS2010, defined the column's width and even said that it cannot grow. And in the ReportViewer, when you run the app, it grows again. Damn it. I set every such tag in the file to False - nothing happened, it still takes the width of the prior column. Interestingly, some columns and rows that I added yesterday don't grow. I just wanted to separate the group columns and the 'Total' section with a thin empty column, but it gets huge and ugly and spoils everything... damn that thing! Why is the rdlc designer so damn stupid? Why does it sometimes not allow me to merge and split cells? Is there any better editor for .rdlc files?

    Read the article

  • ListChildren SQL Reporting Service

    - by markpirvine
    Hi, I'm trying to get a list of available reports via the web service for SQL Reporting Services 2005 Express edition. Each time I try to call the ListChildren method I get an insufficient permissions exception. The code is: ReportingService2005SoapClient rService = new ReportingService2005SoapClient(); CatalogItem[] cItems = null; rService.ListChildren("/", false, out cItems); I added the ASP.NET (IUSR) account into the local admin group on the PC and still get the exception. Is this method supported in the Express edition? Mark

    Read the article

  • How can I hide tracing code in Visual Studio IDE C# ?

    - by Mark
    As I'm starting to put more tracing in my code, I'm realizing it adds a lot of clutter. I know Visual Studio allows you to hide and reveal code; however, I'd like to be able to group code into "tracing" code and then hide and reveal it at will as I'm reading the code. I suppose it could do this either per file, per class or per function. Is there any way to do this? What do you guys do? Adding some clarification: the hide-current-selection feature kind of allows you to do this, except that when the code is hidden you can't tell whether it's tracing or not. You also can't say "hide all tracing code" and "reveal all tracing code", which would be useful when reading a function, depending on what you are trying to do.

    Read the article

  • SQL-query task, decision?

    - by Sirius Lampochkin
    There is a table of currency rates in MS SQL Server 2005: ID | CURR | RATE | DATE 1 | USD | 30 | 01.10.2010 3 | GBP | 45 | 07.10.2010 5 | USD | 31 | 08.10.2010 7 | GBP | 46 | 09.10.2010 9 | USD | 32 | 12.10.2010 11 | GBP | 48 | 03.10.2010 Rates are updated in real time and there are more than 1 billion rows in the table. I need to write a SQL query which will return the latest rate for each currency. My solution is: SELECT c.[id],c.[curr],c.[rate],c.[date] FROM [curr_rate] c, (SELECT curr, MAX(date) AS rate_date FROM [curr_rate] GROUP BY curr) t WHERE c.date = t.rate_date AND c.curr = t.curr ORDER BY c.[curr] ASC Is it possible to write a query without sub-queries and joins with derived tables?
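
    A possible alternative sketch: SQL Server 2005 supports ranking functions, so the latest row per currency can be picked with ROW_NUMBER() in a CTE - it still nests a query, but it avoids the join against a derived MAX() table:

        WITH ranked AS (
            SELECT id, curr, rate, [date],
                   ROW_NUMBER() OVER (PARTITION BY curr ORDER BY [date] DESC) AS rn
            FROM curr_rate
        )
        SELECT id, curr, rate, [date]
        FROM ranked
        WHERE rn = 1        -- keep only the newest row for each currency
        ORDER BY curr;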

    Read the article

  • Reporting Services is displaying extra rows for minimised row groups

    - by Graphain
    Hi, I have a fairly basic SQL Server Reporting Services report that is using nested row groups. Each sub-group depends on expanding its parent to be visible, which is all pretty standard. The layout is something like this: { Company { { Car SUM(Price) { { { Part Price My desired result when expanded is something like this (which I get fine): - SuperCarCompany - SuperCar 20 Door 20 - SuperCar2 70 Door 30 Window 40 - OtherCarCompany - SuperCar2 50 /* Same SuperCar2 */ Door 50 - MoreCarCompany - BestCarEver 535 Engine 500 Door 30 Window 5 And when opened initially something like this: + SuperCarCompany + OtherCarCompany + MoreCarCompany However, I'm getting this: + SuperCarCompany + SuperCar2 70 (i.e. sum of all SuperCar2) + OtherCarCompany + SuperCar 20 + MoreCarCompany + BestCarEver 535 and I can even expand these superfluous rows like this: + SuperCarCompany - SuperCar2 70 (i.e. sum of all SuperCar2) Door 30 (i.e. first child of any SuperCar2) The superfluous rows disappear immediately when I expand the expected row above them (i.e. I'd need to expand all expected rows to get rid of all superfluous rows). Any idea on the cause?

    Read the article

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit from the javastreamingaudio class in the freeTTS package, then bypass the write method which sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert the bytes to floats and try to process them, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames and each frame is a group of bytes, so in my application I have to process the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.

    Read the article

  • Different ways to query this search in SQL?

    - by Bart Terrell
    I am teaching myself MS-SQL and I am trying to find different ways to find the Count of Paid and Unpaid Claims for 2012 grouped by Region from these 3 tables. If there is a returned date, the claim is unpaid if the returned date is null then the claim is paid. I will attach the code I have ran, but I am not sure if there are better ways to do it. Thanks. Here is the code: SET dateformat ymd; CREATE TABLE Claims ( ClaimID INT, SubID INT, [Claim Date] DATETIME ); CREATE TABLE Phoneship ( ClaimID INT, [Shipping Number] INT, [Claim Date] DATETIME, [Ship Date] DATETIME, [Returned Date] DATETIME ); CREATE TABLE Enrollment ( SubID INT, Enrollment_Date DATETIME, Channel NVARCHAR(255), Region NVARCHAR(255), Status FLOAT, Drop_Date DATETIME ); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (102, 201, '2011-10-13 00:00:00', '2011-10-14 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (103, 202, '2011-11-02 00:00:00', '2011-11-03 00:00:00', '2011-11-20 00:00:00'); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (103, 203, '2011-11-02 00:00:00', '2011-11-22 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (105, 204, '2012-01-16 00:00:00', '2012-01-17 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (106, 205, '2012-02-15 00:00:00', '2012-02-16 00:00:00', '2012-02-26 00:00:00'); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (106, 206, '2012-02-15 00:00:00', '2012-02-27 00:00:00', '2012-03-06 00:00:00'); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (107, 207, '2012-03-12 00:00:00', '2012-03-13 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (108, 208, '2012-05-11 00:00:00', '2012-05-12 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (109, 209, '2012-05-13 00:00:00', '2012-05-14 00:00:00', '2012-05-28 00:00:00'); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (109, 210, '2012-05-13 00:00:00', '2012-05-30 00:00:00', NULL); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (101, 12345678, '2011-03-06 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (102, 12347190, '2011-10-13 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (103, 12348723, '2011-11-02 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (104, 12349745, '2011-11-09 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (105, 12347190, '2012-01-16 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (106, 12349234, '2012-02-15 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (107, 12350767, '2012-03-12 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (108, 12350256, '2012-05-11 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (109, 12347701, '2012-05-13 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (110, 12350256, '2012-05-15 00:00:00'); INSERT 
INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (111, 12350767, '2012-06-30 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12345678, '2011-01-05 00:00:00', 'Retail', 'Southeast', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12346178, '2011-03-13 00:00:00', 'Indirect Dealers', 'West', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12346679, '2011-05-19 00:00:00', 'Indirect Dealers', 'Southeast', 0, '2012-03-15 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12347190, '2011-07-25 00:00:00', 'Retail', 'Northeast', 0, '2012-05-21 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12347701, '2011-08-14 00:00:00', 'Indirect Dealers', 'West', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12348212, '2011-09-30 00:00:00', 'Retail', 'West', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12348723, '2011-10-20 00:00:00', 'Retail', 'Southeast', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12349234, '2012-01-06 00:00:00', 'Indirect Dealers', 'West', 0, '2012-02-14 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12349745, '2012-01-26 00:00:00', 'Retail', 'Northeast', 0, '2012-04-15 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12350256, '2012-02-11 00:00:00', 'Retail', 'Southeast', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12350767, '2012-03-02 00:00:00', 'Indirect Dealers', 'West', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12351278, '2012-04-18 00:00:00', 'Retail', 'Midwest', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12351789, '2012-05-08 00:00:00', 'Indirect Dealers', 'West', 0, '2012-07-04 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12352300, '2012-06-24 00:00:00', 'Retail', 'Midwest', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12352811, '2012-06-25 00:00:00', 'Retail', 'Southeast', 1, NULL); And Query1 SELECT Count(ClaimID) AS 'Paid Claim', (SELECT Count(ClaimID) FROM dbo.phoneship WHERE [returned date] IS NOT NULL) AS 'Unpaid Claim' FROM dbo.Phoneship WHERE [Returned Date] IS NULL GROUP BY claimid Query2 SELECT Count(*) AS 'Paid Claims', (SELECT Count(*) FROM dbo.Phoneship WHERE [Returned Date] IS NOT NULL) AS 'Unpaid Claims' FROM dbo.Phoneship WHERE [Returned Date] IS NULL; Query3 Select Distinct(C.[Shipping Number]), Count(C.ClaimID) AS 'COUNT ClaimID', A.Region, A.SubID From dbo.HSEnrollment A Inner Join dbo.Claims B On A.SubId = B.SubId Inner Join dbo.Phoneship C On B.ClaimID = C.ClaimID Where C.[Returned Date] IS NULL Group By A.Region, A.Subid, C.ClaimID, C.[Shipping Number] Order By A.Region
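
    A conditional-aggregation sketch for the paid/unpaid counts per region, using the tables above ("paid" meaning the shipment has no returned date, as described in the question). Note that a claim with both a returned and an outstanding shipment would land in both counts, so the reshipment logic may still need refining:

        SELECT e.Region,
               COUNT(DISTINCT CASE WHEN p.[Returned Date] IS NULL     THEN c.ClaimID END) AS [Paid Claims],
               COUNT(DISTINCT CASE WHEN p.[Returned Date] IS NOT NULL THEN c.ClaimID END) AS [Unpaid Claims]
        FROM Enrollment e
        JOIN Claims c    ON c.SubID = e.SubID
        JOIN Phoneship p ON p.ClaimID = c.ClaimID
        WHERE c.[Claim Date] >= '2012-01-01' AND c.[Claim Date] < '2013-01-01'   -- 2012 claims only
        GROUP BY e.Region;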

    Read the article

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary The basic question is: if you have a very limited bandwidth WiFi to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers: Secure the router and only let a few trusted people use it Tell everyone to turn off unused services & generally police themselves Monitor the traffic with a sniffer and add filters as needed I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: There is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of less than 200 people, typically with less than half that using the WiFi. Something that can be handled with a single home or small office router. Background I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees there, but not exclusively. Usually less than half to a third of the attendees will actually use the WiFi. Maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000 but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G internet connection and fan it out to multiple users using WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best case scenario. Often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media, chat services and so on. This is so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those services that meet those needs. Problems In particular I have noticed a couple of scenarios where particular users end up abusing most of the bandwidth on the router, to the detriment of everyone. These boil into two areas: Intentional use. Folks looking at YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting.At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room that had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something that was taking place right in front of them. Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, that folks often have installed on their laptops and smartphones, perhaps without realizing.Examples: Peer to peer downloading programs such as Bittorrent that run in the background Automatic software update services. 
These are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background. Security software that downloads new signatures such as anti-virus, anti-malware, etc. Backup software and other software that "syncs" in the background to cloud services. For some numbers on how much network bandwidth gets sucked up by these non-web, non-email type services, check out this recent Wired article. Apparently web, email and chat all together are less than one quarter of the Internet traffic now. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold. Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting). But that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because it happens that the 4G coverage at the meeting location is particularly excellent. In a recent test I got 10 Megabits down at the meeting site. The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically only last a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we get the rules tweaked completely, the meeting will be over. What I've Got First thing, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a 2-day meeting. It's enough). I figured I would start with the relevant category selections in OpenDNS's UI (screenshot omitted). I figure I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much. Because these meetings don't last very long, it's probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach. What I Need Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legit things in the background.
Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services: Microsoft automatic updates Apple automatic updates Adobe automatic updates Google automatic updates Other major software update services Major virus/malware/security signature updates Major background backup services Other services that run in the background and can eat lots of bandwidth I also would like any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.

    Read the article

  • split a string based on pattern in java - capital letters and numbers

    - by rookie
    Hi all, I have the following string "3/4Ton". I want to split it as -- word[1] = 3/4 and word[2] = Ton. Right now my piece of code looks like this:- Pattern p = Pattern.compile("[A-Z]{1}[a-z]+"); Matcher m = p.matcher(line); while(m.find()){ System.out.println("The word --> "+m.group()); } It carries out the needed task of splitting the string based on capital letters, like:- String = MachineryInput word[1] = Machinery , word[2] = Input The only problem is it does not preserve numbers, abbreviations, or sequences of capital letters which are not meant to be separate words. Could someone help me out with my regular expression problem? Thanks in advance...

    Read the article

  • Select from multiple tables, remove duplicates

    - by staze
    I have two tables in a SQLite DB, and both have the following fields: idnumber, firstname, middlename, lastname, email, login One table has all of these populated, the other doesn't have the idnumber, or middle name populated. I'd LIKE to be able to do something like: select idnumber, firstname, middlename, lastname, email, login from users1,users2 group by login; But I get an "ambiguous" error. Doing something like: select idnumber, firstname, middlename, lastname, email, login from users1 union select idnumber, firstname, middlename, lastname, email, login from users2; LOOKS like it works, but I see duplicates. my understanding is that union shouldn't allow duplicates, but maybe they're not real duplicates since the second user table doesn't have all the fields populated (e.g. "20, bob, alan, smith, [email protected], bob" is not the same as "NULL, bob, NULL, smith, [email protected], bob"). Any ideas? What am I missing? All I want to do is dedupe based on "login". Thanks!
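
    One possible sketch (SQLite) that collapses the union down to a single row per login, using MAX() simply to prefer a populated value over NULL; note that if the two tables disagree on a non-NULL value, MAX() just picks the larger one:

        SELECT login,
               MAX(idnumber)   AS idnumber,    -- a non-NULL value wins over NULL
               MAX(firstname)  AS firstname,
               MAX(middlename) AS middlename,
               MAX(lastname)   AS lastname,
               MAX(email)      AS email
        FROM (
            SELECT idnumber, firstname, middlename, lastname, email, login FROM users1
            UNION
            SELECT idnumber, firstname, middlename, lastname, email, login FROM users2
        ) AS u
        GROUP BY login;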

    Read the article

  • Linq Grouping by 2 keys as a one

    - by Evgeny
    Hello! I am writing a simple OLAP viewer for my web site. Here are the classes (abstract example): Employee { ID; Name; Roles[]; //What the Employee can do } Order { Price; Employee Manager; Employee Executive; //The person who performs the order } An Employee can be both the Manager and the Executive of an order at the same time. This means that an Employee's role is not fixed. I have to group orders by employees and finally get an IGrouping with an Employee key. So .GroupBy(el => new {el.Manager, el.Executive}) is not allowed. I considered some tricks with IEqualityComparable, but found no solution. If somebody can help I'll be very glad, thank you.

    Read the article

  • How to Access Data in ZODB

    - by Eric
    I have a Plone site that has a lot of data in it and I would like to query the database for usage statistics; i.e. how many calendars have more than 1 entry, how many blogs per group have entries after a given date, etc. I want to run the script from the command line... something like: bin/instance [script name] I've been googling for a while now but can't find out how to do this. Also, can anybody provide some help on how to get user-specific information - information like last login and items created. Thanks! Eric

    Read the article

  • SQL Server version of MySQL's group_concat and escaped strings

    - by TheObserver
    I only have the Express versions of MS SQL Server 2008 and Visual Studio. Given that I can't create a SQL Server project and therefore CLR solutions are out of the question, I've attempted to use select col1, stuff( ( select ' ' + col2 from StrConcat t1 where t2.col1 = t1.col1 for xml path('') ),1,1,'') from StrConcat t2 group by col1 order by col1 to get col2 concatenated into a single row. col2 is a varchar field with some special characters like & and \n. When it is concatenated with the above SQL, it appears to escape those characters, i.e. & becomes &amp; and \n becomes &#x0D;, which is not what I want it to do. So, the question is, what black-box magic is causing that to happen?
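
    That escaping is FOR XML doing standard XML entitization of reserved and non-printable characters. A commonly used workaround, sketched here against the same table: keep the subquery result as xml with the TYPE directive and read the text back out with .value(), which returns the original, unescaped characters:

        SELECT col1,
               STUFF((SELECT ' ' + col2
                      FROM StrConcat t1
                      WHERE t2.col1 = t1.col1
                      FOR XML PATH(''), TYPE         -- keep the result as an xml value, not entitized text
                     ).value('.', 'varchar(max)'),   -- extract the concatenated text unescaped
                     1, 1, '') AS col2_concat
        FROM StrConcat t2
        GROUP BY col1
        ORDER BY col1;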

    Read the article

  • Changing text size on a ggplot bump plot

    - by Tom Liptrot
    Hi, I'm fairly new to ggplot. I have made a bump plot using the code posted below. I got the code from someone's blog - I've lost the link.... I want to be able to increase the size of the labels (here letters, which are very small on the left and right of the plot) without affecting the width of the lines (this will only really make sense after you have run the code). I have tried changing the size parameter but that always alters the line width as well. Any suggestions appreciated. Tom require(ggplot2) df<-matrix(rnorm(520), 5, 10) #makes a random example colnames(df) <- letters[1:10] Price.Rank<-t(apply(df, 1, rank)) dfm<-melt(Price.Rank) names(dfm)<-c( "Date","Brand", "value") p <- ggplot(dfm, aes(factor(Date), value, group = Brand, colour = Brand, label = Brand)) p1 <- p + geom_line(aes(size=2.2, alpha=0.7)) + geom_text(data = subset(dfm, Date == 1), aes(x = Date , size =2.2, hjust = 1, vjust=0)) + geom_text(data = subset(dfm, Date == 5), aes(x = Date , size =2.2, hjust = 0, vjust=0))+ theme_bw() + opts(legend.position = "none", panel.border = theme_blank()) p1 + theme_bw() + opts(legend.position = "none", panel.border = theme_blank())

    Read the article

  • Dynamic page with JSF

    - by Milan
    Hello everybody! I need to make a dynamic web page/form in JSF. Dynamic: at runtime I get the number of input fields I'll have, together with buttons. I'll try to demonstrate what I need to make. inputField1 inputField2 inputField3 BUTTON1 inputField4 inputField5 BUTTON2 inputField6 inputField7 inputField8 inputField9 BUTTON3 I hope you understand what I want to make. So when I click one of the buttons I have to collect the data from its input fields and do something with that data. If I click BUTTON1 I collect data from inputField1 inputField2 inputField3; for BUTTON2, inputField4 inputField5, etc. Probably I should create the input fields programmatically and group them in forms. Any idea how to do it?

    Read the article

  • Apache works on http and https, SVN only on http

    - by user27880
    I asked a question about this before, and got most of it fixed. If I switch off https redirect and go to http://mydomain.com/svn/test0, I get the authentication window popping up, and I can enter my AD credentials, and bingo. Switching https redirect back on, if I go to http://mydomain.com I am automatically redirected to https, which is what I want, and the 'CerntOS test page' pops up. Perfect. The problem occurs when I want to go to one of my test repos via https. Here is my httpd.conf file, with confidential information suitably hosed... === NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] ServerName svn.mycompany.com ErrorLog logs/subversion-error_log CustomLog logs/subversion-access_log common Redirect permanent / https://svn.mycompany.com </VirtualHost> <VirtualHost svn.mycompany.com:443> SSLEngine On SSLCertificateFile /etc/httpd/ssl/wildcard.mycompany.com.crt SSLCertificateKeyFile /etc/httpd/ssl/wildcard.mycompany.com.key SSLCertificateChainFile /etc/httpd/ssl/intermediate.crt ServerName svn.mycompany.com ServerAdmin [email protected] ErrorLog logs/subversion-error_log CustomLog logs/subversion-access_log common <Location /svn> DAV svn SVNParentPath /usr/local/subversion SVNListParentPath off AuthName "Subversion Repositories" # NT Logon Details Require valid-user AuthBasicProvider file ldap AuthType Basic AuthzLDAPAuthoritative off AuthUserFile /etc/httpd/conf/svnpasswd AuthName "Subversion Server II" AuthLDAPURL "ldap://our-pdc:389/OU=Company Name,DC=com,DC=co,DC=uk?sAMAccountName?sub?(objectClass=*)" AuthLDAPBindDN "DOMAIN\subversion" AuthLDAPBindPassword XXXXXXX AuthzSVNAccessFile /etc/httpd/conf/svnaccessfile </Location> </VirtualHost> === Now, in ssl_error_log, I get === ==> /etc/httpd/logs/ssl_error_log <== [Fri Nov 01 16:07:55 2013] [error] [client XXX.XXX.XXX.XXX] File does not exist: /var/www/html/svn === This comes from the DocumentRoot directive further up the httpd.conf file, which of course points to /var/www/html. I know that this location is wrong, but how can I get SVN to serve the repo? I tried an Alias directive as so .. Alias /svn /usr/local/subversion .. but this didn't work. I tried to alter the Location directive. That didn't work either. Can someone help? I sense that this is so close to being solved ... Thanks. Edit: apachectl -S output: [root@svn conf]# apachectl -S VirtualHost configuration: 127.0.0.1:443 svn.mycompany.com (/etc/httpd/conf/httpd.conf:1020) wildcard NameVirtualHosts and default servers: default:443 svn.mycompany.com (/etc/httpd/conf.d/ssl.conf:74) *:80 is a NameVirtualHost default server svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012) port 80 namevhost svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012) Syntax OK

    Read the article

  • RSH between servers not working

    - by churnd
    I have two servers: one CentOS 5.8 & one Solaris 10. Both are joined to my workplace AD domain via PBIS-Open. A user will log into the linux server & run an application which issues commands over RSH to the solaris server. Some commands are also run on the linux server, so both are needed. Due to the application these servers are being used for (proprietary GE software), the software on the linux server needs to be able to issue rsh commands to the solaris server on behalf of the user (the user just runs a script & the rest is automatic). However, rsh is not working for the domain users. It does work for a local user, so I believe I have the necessary trust settings between the two servers correct. However, I can rlogin as a domain user from the linux server to the solaris server. SSH works too (how I wish I could use it). Some relevant info: via rlogin: [user@linux~]$ rlogin solaris connect to address 192.168.1.2 port 543: Connection refused Trying krb4 rlogin... connect to address 192.168.1.2 port 543: Connection refused trying normal rlogin (/usr/bin/rlogin) Sun Microsystems Inc. SunOS 5.10 Generic January 2005 solaris% via rsh: [user@linux ~]$ rsh solaris ls connect to address 192.168.1.2 port 544: Connection refused Trying krb4 rsh... connect to address 192.168.1.2 port 544: Connection refused trying normal rsh (/usr/bin/rsh) permission denied. [user@linux ~]$ relevant snippet from /etc/pam.conf on solaris: # # rlogin service (explicit because of pam_rhost_auth) # rlogin auth sufficient pam_rhosts_auth.so.1 rlogin auth requisite pam_lsass.so set_default_repository rlogin auth requisite pam_lsass.so smartcard_prompt try_first_pass rlogin auth requisite pam_authtok_get.so.1 try_first_pass rlogin auth sufficient pam_lsass.so try_first_pass rlogin auth required pam_dhkeys.so.1 rlogin auth required pam_unix_cred.so.1 rlogin auth required pam_unix_auth.so.1 # # Kerberized rlogin service # krlogin auth required pam_unix_cred.so.1 krlogin auth required pam_krb5.so.1 # # rsh service (explicit because of pam_rhost_auth, # and pam_unix_auth for meaningful pam_setcred) # rsh auth sufficient pam_rhosts_auth.so.1 rsh auth required pam_unix_cred.so.1 # # Kerberized rsh service # krsh auth required pam_unix_cred.so.1 krsh auth required pam_krb5.so.1 # I have not really seen anything useful in either system log that seem to be directly related to the failed login attempt. I've tail -f'd /var/adm/messages on solaris & /var/log/messages on linux during the failed attempts & nothing shows up. Maybe I need to be doing something else?

    Read the article

  • Simple MySQL Query taking 45 seconds (Gets a record and its "latest" child record)

    - by Brian Lacy
    I have a query which gets a customer and the latest transaction for that customer. Currently this query takes over 45 seconds for 1000 records. This is especially problematic because the script itself may need to be executed as frequently as once per minute! I believe using subqueries may be the answer, but I've had trouble constructing it to actually give me the results I need. SELECT customer.CustID, customer.leadid, customer.Email, customer.FirstName, customer.LastName, transaction.*, MAX(transaction.TransDate) AS LastTransDate FROM customer INNER JOIN transaction ON transaction.CustID = customer.CustID WHERE customer.Email = '".$email."' GROUP BY customer.CustID ORDER BY LastTransDate LIMIT 1000 I really need to get this figured out ASAP. Any help would be greatly appreciated!
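
    One common rewrite, sketched here as a possibility: find each customer's latest transaction date in a derived table first, then join back to pick up the matching transaction row. It assumes indexes on transaction (CustID, TransDate) and customer (Email), and ties on TransDate would return more than one row per customer:

        SELECT c.CustID, c.leadid, c.Email, c.FirstName, c.LastName,
               t.*, latest.LastTransDate
        FROM customer c
        INNER JOIN (
            SELECT CustID, MAX(TransDate) AS LastTransDate
            FROM transaction
            GROUP BY CustID
        ) latest ON latest.CustID = c.CustID
        INNER JOIN transaction t
            ON t.CustID = latest.CustID
           AND t.TransDate = latest.LastTransDate
        WHERE c.Email = '...'            -- the email parameter goes here
        ORDER BY latest.LastTransDate
        LIMIT 1000;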

    Read the article

  • Double of Total Problem

    - by Gopal
    Table1 ID | WorkTime ----------------- 001 | 10:50:00 001 | 00:00:00 002 | .... The WorkTime datatype is varchar. SELECT ID, CONVERT(varchar(10), TotalSeconds1 / 3600) + ':' + RIGHT('00' + CONVERT(varchar(2), (TotalSeconds1 - TotalSeconds1 / 3600 * 3600) / 60), 2) + ':' + RIGHT('00' + CONVERT(varchar(2), TotalSeconds1 - (TotalSeconds1 / 3600 * 3600 + (TotalSeconds1 - TotalSeconds1 / 3600 * 3600) / 60 * 60)), 2) AS TotalWork From ( SELECT ID, SUM(DATEDIFF(second, CONVERT(datetime, '1/1/1900'), CONVERT(datetime, '1/1/1900 ' + WorkTime))) AS TotalSeconds1 FROM table1 group by ID) AS tab1 where id = '001' The above query is showing double the total time. For example, from table1 I want to calculate the total WorkTime; when I run the above query it shows ID WorkTime 001 21:40:00 002 ..., but it should show ID WorkTime 001 10:50:00 ..., How do I avoid doubling the total WorkTime? How should I modify my query?

    Read the article

  • Running Visual Studio 2008 and 2010 at the same time

    - by aip.cd.aish
    A group of us are working on a project which we built with .NET 3.5 in Visual Studio 2008. I want to test out Visual Studio 2010 and .NET 4 (well, mainly for WPF 4). I am just wondering: if I install VS 2010, will I still be able to use VS 2008 to open the first project? I know that when I open older projects made in VS 2003/2005, I get an upgrade wizard. I do not want to upgrade the first project to 2010, since that would probably mean everyone else has to use it too. I have not done this before; is it possible to run both versions of Visual Studio, where each version opens its own projects? (This may not even be an issue, but I just wanted someone to confirm it, so that I don't spend a lot of time trying to undo changes.)

    Read the article

  • SSRS charts: How can I display the n biggest slices / bars?

    - by Adrian Grigore
    Hi, I know how to group chart slices / bars below a certain threshold together into one bar. But if the data displayed in the chart contains lots of small slices, collecting slices below, say, 5% results in a huge "other" bar. On the other hand, if I collect only slices below a very small threshold, the chart could contain too much data to be readable. So, how can I set a hard limit on the number of slices, show the n biggest slices and collect everything else into one "other" slice? How can I do that with SSRS 2008? Thanks, Adrian
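
    If the chart's own grouping settings can't express a hard top-n limit, one hedged workaround is to shape the data in the dataset query instead: rank the categories, keep the top n and fold the rest into an 'Other' bucket. A T-SQL sketch with made-up table and column names, assuming one row per category (aggregate first if there are more):

        WITH ranked AS (
            SELECT Category, Amount,
                   ROW_NUMBER() OVER (ORDER BY Amount DESC) AS rn
            FROM SalesByCategory              -- hypothetical source table
        )
        SELECT CASE WHEN rn <= 10 THEN Category ELSE 'Other' END AS Slice,   -- 10 = n biggest slices
               SUM(Amount) AS Amount
        FROM ranked
        GROUP BY CASE WHEN rn <= 10 THEN Category ELSE 'Other' END
        ORDER BY SUM(Amount) DESC;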

    Read the article
