Search Results

  • Using LinqExtender to make an OData feed fails

    - by BurningIce
    A pretty simple question: has anyone here tried to make an OData feed based on an IQueryable created with LinqExtender? I have created a simple LINQ provider that supports Where, Select, OrderBy and Take, and wanted to expose it as an OData feed. I keep getting an error, though; the exception is a NullReferenceException with the following stack trace:

        at System.Data.Services.Serializers.Serializer.GetObjectKey(Object resource, IDataServiceProvider provider, String containerName)
        at System.Data.Services.Serializers.Serializer.GetUri(Object resource, IDataServiceProvider provider, ResourceContainer container, Uri absoluteServiceUri)
        at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, Type expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target)
        at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__0.MoveNext()
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer)
        at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved)
        at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)
        at System.Data.Services.ResponseBodyWriter.Write(Stream stream)

    I've kinda narrowed it down to an issue where LinqExtender wraps every returned object, so that my object actually inherits itself; that's at least how it looks in the debugger. These two queries are basically the same. The first is the legacy API, where the OrderBy and Select are regular LINQ to Objects. The second query is a "real" LINQ provider made with LinqExtender:

        var db = CalendarDataProvider.GetCalendarEntriesByDate(DateTime.Now, DateTime.Now.AddMonths(1), Guid.Empty)
            .OrderBy(o => o.Title)
            .Select(o => new ODataCalendarEntry(o));

        var query = new ODataCalendarEntryQuery()
            .Where(o => o.Start > DateTime.Now && o.End < DateTime.Now.AddMonths(1))
            .OrderBy(o => o.Title);

    When returning db for the OData feed everything is fine, but returning query throws a NullReferenceException. I've tried all kinds of tricks and even tried to project all the data into a new object like this, but still the same error:

        return query.Select(o => new ODataCalendarEntry
        {
            Title = o.Title,
            Start = o.Start,
            End = o.End,
            Name = o.Name
        });

  • Creating foreach loops using Code Igniter controller and view

    - by Tim
    Hello, this is a situation I have found myself in a few times and I just want to clear it up once and for all. It's best just to show what I need to do with some example code.

    My controller:

        function my_controller()
        {
            $id = $this->uri->segment(3);

            $this->db->from('cue_sheets');
            $this->db->where('id', $id);
            $data['get_cue_sheets'] = $this->db->get();

            $this->db->from('clips');
            $this->db->where('sheet_id', ' CUE SHEET ID GOES IN HERE ??? ');
            $data['get_clips'] = $this->db->get();

            $this->load->view('show_sheets_and_clips', $data);
        }

    My view:

        <?php if($get_cue_sheets->result_array()) { ?>
            <?php foreach($get_cue_sheets->result_array() as $sheetRow): ?>
                <h1><?php echo $sheetRow['sheet_name']; ?></h1><br/>
                <?php if($get_clips->result_array()) { ?>
                    <ul>
                    <?php foreach($get_clips->result_array() as $clipRow): ?>
                        <li><?php echo $clipRow['clip_name']; ?></li>
                    <?php endforeach; ?>
                    </ul>
                <?php } else { echo 'No Clips Found'; } ?>
            <?php endforeach; ?>
        <?php } ?>

    The problem I am having is the concept of passing data back to the controller from the view: I am sending the database queries off to the view as an array, when I really need some more information about which sheet ID I am looking at so I can show the relevant clips. I hope this makes sense to someone out there. Thanks, Tim
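
    One common way around this, shown as a rough sketch rather than a tested solution, is to run the clips query per sheet in the controller and pass one nested array to the view, so the view never needs to reach back; the $sheet['clips'] key and the reshaped $data are my assumptions, not CodeIgniter conventions:

        // Hypothetical controller: build the nested structure up front so the
        // view only iterates. Table and field names follow the question.
        function my_controller()
        {
            $id = $this->uri->segment(3);

            $sheets = $this->db->get_where('cue_sheets', array('id' => $id))->result_array();

            // Attach each sheet's clips to the sheet row itself
            foreach ($sheets as &$sheet) {
                $sheet['clips'] = $this->db
                    ->get_where('clips', array('sheet_id' => $sheet['id']))
                    ->result_array();
            }
            unset($sheet);

            $data['sheets'] = $sheets;
            $this->load->view('show_sheets_and_clips', $data);
        }

    The view then becomes a plain nested foreach over $sheets and $sheet['clips'], with no query objects involved.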

  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (server reboot, usually). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4 GB in the machine; the DB is under 1.5 GB currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects and sysindexes and run

        SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields>

    for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are out of normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so it is not an issue if such a process, until complete, slows users down more than an unprimed working set would.
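
    For reference, the "hard way" described above could be sketched along these lines against the newer sys.* catalog views; this is an untested outline, not a recommended tool:

        -- Rough warm-up sketch for SQL Server 2005: touch every page of every
        -- named index once. COUNT_BIG(*) forces a scan of the hinted index
        -- without streaming every column back to the client.
        DECLARE @tbl sysname, @idx sysname, @sql nvarchar(max);

        DECLARE warm CURSOR FOR
            SELECT QUOTENAME(s.name) + '.' + QUOTENAME(o.name), QUOTENAME(i.name)
            FROM sys.indexes AS i
            JOIN sys.objects AS o ON o.object_id = i.object_id
            JOIN sys.schemas AS s ON s.schema_id = o.schema_id
            WHERE o.type = 'U'              -- user tables only
              AND i.name IS NOT NULL        -- skip heaps
              AND i.type IN (1, 2);         -- clustered and nonclustered

        OPEN warm;
        FETCH NEXT FROM warm INTO @tbl, @idx;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'SELECT COUNT_BIG(*) FROM ' + @tbl
                     + N' WITH (INDEX(' + @idx + N'))';
            EXEC sp_executesql @sql;
            FETCH NEXT FROM warm INTO @tbl, @idx;
        END
        CLOSE warm;
        DEALLOCATE warm;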

  • Using a class within a class?

    - by Josh
    I built myself a MySQL class which I use in all my projects. I'm about to start a project that is heavily based on user accounts, and I plan on building my own class for this as well. The thing is, a lot of the methods in the user class will be MySQL queries and methods from the MySQL class. For example, I have a method in my user class to update someone's password:

        class user
        {
            function updatePassword($usrName, $newPass)
            {
                $con = mysql_connect('db_host', 'db_user', 'db_pass');
                $sql = "UPDATE users SET password = '$newPass' WHERE username = '$userName'";
                $res = mysql_query($sql, $con);
                if ($res)
                    return true;
                mysql_close($con);
            }
        }

    (I kind of rushed this, so excuse any syntax errors.) As you can see, that would use MySQL to update a user's password, but without using my MySQL class. Is this correct? I see no way to use my MySQL class within my user class without it seeming dirty. Do I just use it the normal way, like $DB = new DB();? That would mean including my mysql.class.php somewhere too... I'm quite confused about this, so any help would be appreciated, thanks.
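
    The usual answer to this kind of question is constructor injection: hand the database object to the user class once instead of connecting inside every method. A rough sketch, where the DB class name and its query() method stand in for whatever the poster's own MySQL wrapper actually exposes:

        <?php
        require_once 'mysql.class.php';   // whichever file defines the wrapper

        class User
        {
            private $db;

            // The MySQL wrapper is handed in once, instead of each
            // method opening and closing its own connection
            public function __construct(DB $db)
            {
                $this->db = $db;
            }

            public function updatePassword($userName, $newPass)
            {
                // query() is a stand-in for the wrapper's real method name
                return $this->db->query(
                    "UPDATE users SET password = '$newPass' WHERE username = '$userName'"
                );
            }
        }

        $user = new User(new DB());
        $user->updatePassword('josh', 'secret');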

  • jQuery server ping slowly but surely filling memory?

    - by danspants
    I use the following piece of code to test if our server is running whilst the user is on a page. I've also started adding other functions that grab small amounts of data that are constantly changing and are to be relayed to the user (files waiting for download, messages, reports etc). I've noticed recently that if I leave any page open (all pages contain the same function), the browser takes up more and more system memory, which I can only attribute to this regular task (overnight it reached 1.6 GB). Is there some way of clearing out the data that is being accumulated? Is this normal behaviour? As far as I can tell, every time I call the function it should overwrite the previously retrieved data.

        function testServer() {
            jQuery.ajax({
                type: "HEAD",
                url: "/media/d_arrow_blue.png",
                error: function(msg) {
                    jQuery.jGrowl("Server Disconnected");
                }
            });

            // retrieves count of files awaiting download - move to separate function
            jQuery.get("/get_files/", {"type": "count"}, function(data) {
                jQuery("#downloadList").children("div").text(data);
            });
        }

        jQuery().doTimeout(6000, function() {
            testServer();
            return true;
        });
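
    One way to narrow the leak down, offered as a guess rather than a diagnosis, is to drop the doTimeout plugin and reschedule with plain setTimeout from the request's complete callback, so any remaining growth has to come from the ajax calls themselves:

        function pollServer() {
            jQuery.ajax({
                type: "HEAD",
                url: "/media/d_arrow_blue.png",
                cache: false,                  // avoid piling up cached responses
                error: function() {
                    jQuery.jGrowl("Server Disconnected");
                },
                complete: function() {
                    // reschedule only after the previous request finished,
                    // so requests can never stack up
                    setTimeout(pollServer, 6000);
                }
            });
        }

        pollServer();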

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level, the problem is as follows. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows, these being:

    1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and

    2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) by forcing a HASH JOIN to be used. In both cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.
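
    For reference, the two workarounds described above would look roughly like this, sketched from the description; the parameter name is invented, and OPTIMIZE FOR UNKNOWN assumes SQL Server 2008 or later:

        -- (a) parameterised, with the optimiser told not to sniff the value
        EXEC sp_executesql
            N'SELECT SA.*
              FROM cg.SEARCHSERVER_ACTYS AS SA
              JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, ''reports'') AS T1
                  ON T1.[Key] = SA.UNIQUE_ID
              WHERE SA.CHG_DATE > @cutoff
              OPTION (OPTIMIZE FOR UNKNOWN)',
            N'@cutoff datetime',
            @cutoff = '19 Feb 2010';

        -- (b) the same query shape, forcing hash joins for the whole statement
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'
        OPTION (HASH JOIN);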

  • Why are changes to coffeescript files not being compiled when my Rails 3.2.0 app is in development mode?

    - by ben
    Normally, any changes I make to .js.coffee files in my Rails 3.2.0 app in development mode take effect when I refresh the page. All of a sudden, this is not happening. If I do rake assets:precompile, then the changes are shown, but if I then do rake assets:clean they go back to not being shown. What is causing this?

    Edit: Restarting the server makes the changes show. Why isn't this happening automatically as before?

    Edit: Here is my development.rb:

        Myapp::Application.configure do
          # Settings specified here will take precedence over those in config/application.rb

          # In the development environment your application's code is reloaded on
          # every request. This slows down response time but is perfect for development
          # since you don't have to restart the web server when you make code changes.
          config.cache_classes = false

          # Log error messages when you accidentally call methods on nil.
          config.whiny_nils = true

          # Show full error reports and disable caching
          config.consider_all_requests_local = true
          config.action_controller.perform_caching = false

          # Don't care if the mailer can't send
          config.action_mailer.raise_delivery_errors = false

          # Print deprecation notices to the Rails logger
          config.active_support.deprecation = :log

          # Only use best-standards-support built into browsers
          config.action_dispatch.best_standards_support = :builtin

          # Raise exception on mass assignment protection for Active Record models
          config.active_record.mass_assignment_sanitizer = :strict

          # Log the query plan for queries taking more than this (works
          # with SQLite, MySQL, and PostgreSQL)
          config.active_record.auto_explain_threshold_in_seconds = 0.5

          # Do not compress assets
          config.assets.compress = false

          # Expands the lines which load the assets
          config.assets.debug = true

          config.action_mailer.default_url_options = { :host => 'localhost:3000' }
          config.log_level = :warn
        end

  • What is ADO.NET?

    - by ChrisC
    I've written a few Access DBs and used some light VBA, and had an OO class. Now I'm undertaking to write a C# DB app. I've got VS and System.Data.SQLite installed and connected, and have entered my tables and columns, but that's where I'm stuck. I'm trying to find what info and tutorials I need to look for, but there are a lot of terms I don't understand, and I don't know if, or exactly how, they apply to my project. I've read definitions for these terms (Wikipedia and elsewhere), but the definitions don't make sense to me because I don't know what the things are, how they fit together, or which ones are optional for my project. Some of the terms are from the System.Data.SQLite website (I wanted to use System.Data.SQLite for my DB). I figured my first step would be to get the DB and queries set up and tested. Please tell me if there are other pieces of this part of the puzzle I will need to know about too. If I can figure out what's what, I can start looking for the tutorials I need. (BTW, I know I don't want to use an ORM, because my app is so simple and because I want to keep from biting off too much too soon.) Thank you very much.

        SQLite.NET
        ADO.NET
        ADO.NET provider
        ADO.NET 2.0 Provider for SQLite
        SQLite
        Entity Framework
        SQLite Entity Framework provider
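
    Since the underlying question is what ADO.NET looks like in practice, here is a minimal sketch of the usual connection/command/reader pattern with the System.Data.SQLite provider; the database file and table are invented for illustration:

        using System;
        using System.Data.SQLite;

        class Demo
        {
            static void Main()
            {
                // The provider supplies the ADO.NET types: a connection, a command
                // and a data reader. "app.db" and "Pets" are made-up examples.
                using (var con = new SQLiteConnection("Data Source=app.db"))
                {
                    con.Open();

                    using (var cmd = new SQLiteCommand(
                        "SELECT Name, Species FROM Pets WHERE Species = @s", con))
                    {
                        cmd.Parameters.AddWithValue("@s", "cat");

                        using (SQLiteDataReader rdr = cmd.ExecuteReader())
                        {
                            while (rdr.Read())
                                Console.WriteLine("{0} ({1})", rdr["Name"], rdr["Species"]);
                        }
                    }
                }
            }
        }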

  • Can a large transaction log cause CPU hikes to occur?

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the DB is 15 GB, with roughly 5 GB to the DB and 10 GB to the transaction log. Just recently, a web application that is connecting to that DB is timing out. I have traced the actions on the web page and examined the queries that execute whilst these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the DB server's CPU hikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say several, read about 5). Under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the DB and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads-up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks,
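
    Not an answer to the CPU question itself, but as a first check the log's size and used percentage can be read with standard SQL Server commands; these are ordinary DBCC/BACKUP statements, with a made-up database name and backup path:

        -- Shows size and percentage used of every database's transaction log
        DBCC SQLPERF(LOGSPACE);

        -- If the database is in FULL recovery and the log has never been backed
        -- up, a log backup lets the space be reused (hypothetical names)
        BACKUP LOG ClientDb TO DISK = 'D:\backups\ClientDb_log.trn';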

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics showing which sections of videos are being watched (at the moment, we just register a view when a video starts playing). For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section. My first thought was to collect the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event you can hook into when a viewer clicks away from the page the movie is on (probably a good thing; it would be open to abuse). So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable, so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to the database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats. So, to the question: what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
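
    The "time spans rather than individual heartbeats" idea amounts to interval merging; here is a small illustrative sketch in JavaScript (not tied to any player API) of collapsing heartbeat-derived intervals into spans before a session is flushed:

        // Merge sorted [start, end] intervals into spans, so hundreds of
        // heartbeats become a handful of rows. Purely illustrative.
        function mergeSpans(intervals) {
            var merged = [];
            intervals.sort(function (a, b) { return a[0] - b[0]; });
            intervals.forEach(function (iv) {
                var last = merged[merged.length - 1];
                if (last && iv[0] <= last[1]) {
                    last[1] = Math.max(last[1], iv[1]);  // overlaps: extend span
                } else {
                    merged.push([iv[0], iv[1]]);         // gap: start new span
                }
            });
            return merged;
        }

        // e.g. [[0,10],[10,20],[90,120]] -> [[0,20],[90,120]]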

  • How To Find Reasons Why a Site Goes Online/Offline

    - by HollerTrain
    Seems today a website I manage has been going online and offline throughout the entire day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. So here is what I DO know:

    I use a program that pings the server every minute and emails me when the server is not responding, so I know exactly when the site is online and offline. The site went up and down between 8pm and 12pm on 12.28, and around the 1am hour early in the morning of 12.29 (New York City timezone; all times below are in the same timezone).

    At the time of the ups and downs I see a lot of strain on memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi), and memory usage then went down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much.

    When the site is going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down, this may be one of the reasons?

    I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.

  • Capture data read from a file into a string stream in Java

    - by halluc1nati0n
    I'm coming from a C++ background, so be kind to my n00bish queries... I'd like to read data from an input file and store it in a stringstream. I can accomplish this in an easy way in C++ using stringstreams. I'm a bit lost trying to do the same in Java. Following is the crude code/way I've developed, where I'm storing the data read line by line in a string array. I need to use a string stream to capture my data into (rather than use a string array). Any help?

        char dataCharArray[] = new char[2];
        int marker = 0;
        String inputLine;
        String temp_to_write_data[] = new String[100];

        // Now, read from output_x into stringstream
        FileInputStream fstream = new FileInputStream("output_" + dataCharArray[0]);

        // Convert our input stream to a BufferedReader
        BufferedReader in = new BufferedReader(new InputStreamReader(fstream));

        // Continue to read lines while there are still some left to read
        while ((inputLine = in.readLine()) != null)
        {
            // Print file line to screen
            // System.out.println(inputLine);
            temp_to_write_data[marker] = inputLine;
            marker++;
        }
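
    The closest everyday Java analogue to a C++ stringstream is StringBuilder (or StringWriter); a minimal sketch of the same loop accumulating into one, with an invented file name:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class ReadIntoBuffer
        {
            public static void main(String[] args) throws IOException
            {
                StringBuilder buffer = new StringBuilder();

                BufferedReader in = new BufferedReader(new FileReader("output_a"));
                String line;
                while ((line = in.readLine()) != null)
                {
                    // Append each line plus the separator readLine() stripped
                    buffer.append(line).append('\n');
                }
                in.close();

                String everything = buffer.toString();  // whole file as one String
                System.out.println(everything.length() + " characters read");
            }
        }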

  • How do you implement a good profanity filter?

    - by Ben Throop
    Many of us need to deal with user input, search queries, and situations where the input text can potentially contain profanity or undesirable language. Oftentimes this needs to be filtered out. Where can one find a good list of swear words in various languages and dialects? Are there APIs available for sources that contain good lists? Or maybe an API that simply says "yes this is clean" or "no this is dirty" with some parameters? What are some good methods for catching folks trying to trick the system, like a$$, azz, or a55? Bonus points if you offer solutions for PHP. :)

    Edit: in response to answers that say simply to avoid the programmatic issue: I think there is a place for this kind of filter when, for instance, a user can use public image search to find pictures that get added to a sensitive community pool. If they can search for "penis", then they will likely get many pictures of, yep. If we don't want pictures of that, then preventing the word as a search term is a good gatekeeper, though admittedly not a foolproof method. Getting the list of words in the first place is the real question. So I'm really referring to a way to figure out whether a single token is dirty or not and then simply disallow it. I'd not bother preventing a sentiment like the totally hilarious "long necked giraffe" reference. Nothing you can do there. :)
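
    On the a$$/azz/a55 point, one standard trick, sketched here in PHP since the question asks for it (the word list and substitution map are tiny placeholders, not a real filter), is to normalise common character substitutions before checking each token against the list:

        <?php
        // Map common leet-speak substitutions back to letters, then compare.
        $badWords = array('ass');
        $map = array('@' => 'a', '$' => 's', '5' => 's', '4' => 'a',
                     '0' => 'o', '1' => 'i', '3' => 'e', 'z' => 's');

        function isDirty($token, $badWords, $map)
        {
            // Lowercase first, then substitute, so 'A$$' and 'a55' both
            // normalise to 'ass' before the lookup
            $normalised = strtr(strtolower($token), $map);
            return in_array($normalised, $badWords);
        }

        var_dump(isDirty('a$$', $badWords, $map));   // true
        var_dump(isDirty('a55', $badWords, $map));   // true
        var_dump(isDirty('hello', $badWords, $map)); // false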

  • Facebook Connect for simple authentication?

    - by Starnzy
    Hi, I have an ASP.NET website into which I want to introduce 'Facebook Connect' functionality, purely for account login/creation purposes. I want a user to be able to click a 'Login using Facebook' type button and then be logged into my website based on a user-id lookup from the Facebook response. I have a couple of questions surrounding this:

    Presumably I can do all of this using the Facebook API, without the need for an actual pretty public-facing 'application' on Facebook? I simply want to utilise the Facebook API for authenticating an account; I'm not interested in creating some app that does something 'within' Facebook itself.

    I have located some code snippets online and tried using the Facebook Developer Toolkit, calling the getInfo method, and whilst it does come back to my website with a uid, none of the other user information is present in the response, like Email, Name etc. The uid is the only populated field in the response. Here is the code I use:

        if (ConnectAuthentication.isConnected())
        {
            API api = new API();
            api.ApplicationKey = ConnectAuthentication.ApiKey;
            api.SessionKey = ConnectAuthentication.SessionKey;
            api.Secret = ConnectAuthentication.SecretKey;
            api.uid = ConnectAuthentication.UserID;

            // Display user data captured from the Facebook API.
            facebook.Schema.user facebookUser = null;
            try
            {
                facebookUser = api.users.getInfo();

                User user = new User();
                user.FacebookUser = facebookUser;
                user.IsFacebookUser = true;
                return user;
            }
            catch
            {
                return null;
            }
        }
        else
        {
            return null;
        }

    Can anyone please help with either/both of these queries? Thanks in advance...

  • C# Multi CheckboxList update inserts checked records but doesn't delete unchecked records

    - by DLL
    I have a multi-checkboxlist on a FormView. Both use queries in a TableAdapter. I'm using VS 2012. When the user updates the form, I use the following code to update the checkbox data. If a user checks a new box, a new record is inserted correctly; however, if the user unchecks a box, the existing record is not deleted. The delete query works fine if I run it from the query builder in the TableAdapter, it's reaching the expected line in the code correctly, all values are correct and I receive no errors. I use a similar query to delete records when the form-level data is deleted, which works fine. The very last line is the one that doesn't work.

    Query:

        DELETE FROM [SLA_Categories]
        WHERE (([SLA_ID] = @SLA_ID) AND ([Choice_ID] = @Choice_ID))

    Code:

        protected void FormView1_ItemUpdating(object sender, FormViewUpdateEventArgs e)
        {
            if (FormView1.DataKey.Value != null)
            {
                Categs = (CheckBoxList)FormView1.FindControl("CheckBoxList1");
                CurrentSLA_ID = (int)FormView1.DataKey.Value;
            }

            if (CurrentSLA_ID > 0)
            {
                foreach (ListItem li in Categs.Items)
                {
                    // See if there's a record for the current sla in this category
                    int CurrentChoice_ID = Convert.ToInt32(li.Value);
                    SLADataSetTableAdapters.SLA_CategoriesTableAdapter myAdapter;
                    myAdapter = new SLADataSetTableAdapters.SLA_CategoriesTableAdapter();
                    int myCount = (int)myAdapter.FindCategoryBySLA_IDAndChoice_ID(CurrentSLA_ID, CurrentChoice_ID);

                    // If this category is checked and there is not an existing rec, insert one
                    if (li.Selected == true && myCount < 1)
                    {
                        // Insert a rec for this sla
                        myAdapter.InsertCategory(CurrentChoice_ID, CurrentSLA_ID);
                    }

                    // If this category is unchecked and there is an existing rec, delete it
                    if (li.Selected == false && myCount > 0)
                    {
                        // Delete this rec
                        myAdapter.DeleteCategoryBySLA_IDAndChoice_ID(CurrentChoice_ID, CurrentSLA_ID);
                    }
                }
            }
        }

  • Is a MySQL index useful on column 'state' when only doing bit operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long field 'flowstate' used as a bitset. To query MySQL for entities which have undergone a certain operation, I do something like:

        select * from entities where state >> 7 & 1 = 1

    indicating bit 7 (corresponding to operation 7) has run (simplified). Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble since queries like the above run pretty slowly. What I'd like to know: Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever. If it doesn't, are there any other things I could do to speed things up? Are there special 'mask indices' for fields with use cases like the above? TIA, Geert-Jan
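
    For what it's worth, the usual workaround when a bit test has to be index-friendly is to materialise the interesting flag as its own column; a sketch, assuming operation 7 is the hot one (the column and index names are invented):

        -- Hypothetical: promote bit 7 to a real, indexable column
        ALTER TABLE entities ADD COLUMN op7_done TINYINT NOT NULL DEFAULT 0;
        CREATE INDEX idx_entities_op7 ON entities (op7_done);

        -- Backfill from the existing bitset, then keep it in sync on writes
        UPDATE entities SET op7_done = (state >> 7) & 1;

        -- The query becomes an ordinary comparison the index can serve
        SELECT * FROM entities WHERE op7_done = 1;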

  • Using pam_python in a script running with mod_python

    - by markys
    Hi! I would like to develop a web interface to allow users of a Linux system to do certain tasks related to their account. I decided to write the backend of the site using Python and mod_python on Apache. To authenticate the users, I thought I could use python_pam to query the PAM service. I adapted the example bundled with the module and got this:

        # out is the output stream used to print debug
        def auth(username, password, out):
            def pam_conv(aut, query_list, user_data):
                out.write("Query list: " + str(query_list) + "\n")
                # List to store the responses to the different queries
                resp = []
                for item in query_list:
                    query, qtype = item
                    # If PAM asks for an input, give the password
                    if qtype == PAM.PAM_PROMPT_ECHO_ON or qtype == PAM.PAM_PROMPT_ECHO_OFF:
                        resp.append((str(password), 0))
                    elif qtype == PAM.PAM_PROMPT_ERROR_MSG or qtype == PAM.PAM_PROMPT_TEXT_INFO:
                        resp.append(('', 0))
                out.write("Our response: " + str(resp) + "\n")
                return resp

            # If username or password is undefined, fail
            if username is None or password is None:
                return False

            service = 'login'
            pam_ = PAM.pam()
            pam_.start(service)
            # Set the username
            pam_.set_item(PAM.PAM_USER, str(username))
            # Set the conversation callback
            pam_.set_item(PAM.PAM_CONV, pam_conv)
            try:
                pam_.authenticate()
                pam_.acct_mgmt()
            except PAM.error, resp:
                out.write("Error: " + str(resp) + "\n")
                return False
            except:
                return False
            # If we get here, the authentication worked
            return True

    My problem is that this function does not behave the same whether I use it in a simple script or through mod_python. To illustrate this, I wrote these simple test cases:

        my_username = "markys"
        my_good_password = "lalala"
        my_bad_password = "lololo"

        def handler(req):
            req.content_type = "text/plain"
            req.write("1- " + str(auth(my_username, my_good_password, req)) + "\n")
            req.write("2- " + str(auth(my_username, my_bad_password, req)) + "\n")
            return apache.OK

        if __name__ == "__main__":
            print "1- " + str(auth(my_username, my_good_password, sys.__stdout__))
            print "2- " + str(auth(my_username, my_bad_password, sys.__stdout__))

    The result from the script is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        1- True
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    but the result from mod_python is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        Error: ('Authentication failure', 7)
        1- False
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    I don't understand why the auth function does not return the same value given the same inputs. Any idea where I got this wrong? Here is the original script, if that could help you. Thanks a lot!

  • NHibernate stored procedure problem

    - by Calvin
    I'm having a hard time trying to get my stored procedure to work with NHibernate. The data returned from the SP does not correspond to any database table. This is my mapping file:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="DomainModel"
                           namespace="DomainModel.Entities">
          <sql-query name="DoSomething">
            <return class="SomeClass">
              <return-property name="ID" column="ID"/>
            </return>
            exec [dbo].[sp_doSomething]
          </sql-query>
        </hibernate-mapping>

    Here is my domain class:

        namespace DomainModel.Entities
        {
            public class SomeClass
            {
                public SomeClass()
                {
                }

                public virtual Guid ID { get; set; }
            }
        }

    When I run the code, it fails with:

        Exception Details: NHibernate.HibernateException:
            Errors in named queries: {DoSomething} at line 80

        Line 78: config.Configure(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "NHibernate.config"));
        Line 79:
        Line 80: g_sessionFactory = config.BuildSessionFactory();

    When I debug into the NHibernate code, it seems that SomeClass is not added to the persister dictionary, because there isn't a class mapping (only sql-query) defined in the hbm.xml. Later on, in the CheckNamedQueries function, it is not able to find the persister for SomeClass. I've checked all the obvious things (e.g. made the hbm an embedded resource) and my code isn't much different from other samples I found on the web, but somehow I just can't get it working. Any idea how I can resolve this issue?

  • PL/SQL - How to pull data from 3 tables based on latest created date

    - by Nancy
    Hello, I'm hoping someone can help me, as I've been stuck on this problem for a few days now. Basically I'm trying to pull data from 3 tables in Oracle: 1) an Orders table, 2) a Vendor table and 3) a Master Data table. Here's what the 3 tables look like:

        Table 1: BIZ_DOC2 (Orders table)
            OBJECTID        (unique key)
            UNIQUE_DOC_NAME (document name, i.e. ORD-005)
            CREATED_AT      (date the order was created)

        Table 2: UDEF_VENDOR (Vendors table)
            PARENT_OBJECT_ID   (matches up to the OBJECTID in the Orders table)
            VENDOR_OBJECT_NAME (the name of the vendor, i.e. Acme)

        Table 3: BIZ_UNIT (Master Data table)
            PARENT_OBJECT_ID     (matches up to the OBJECTID in the Orders table)
            BIZ_UNIT_OBJECT_NAME (the name of the business unit, i.e. widget A, widget B)

    Note: the Vendors table and the Master Data table have no link between them except through the Orders table. I can join all of the data from the tables, and before selecting the latest order date it looks something like this:

        ORD-005 | Widget A | Acme | 3/14/10
        ORD-005 | Widget B | Acme | 3/14/10
        ORD-004 | Widget C | Acme | 3/10/10

    Ideally I'd like to return the latest order for each vendor. However, each order may contain multiple business units (e.g. types of widgets), so if a vendor's latest record is ORD-005 and the order contains 2 business units, here's what the result set should look like, with the columns UNIQUE_DOC_NAME, BIZ_UNIT_OBJECT_NAME, VENDOR_OBJECT_NAME, CREATED_AT:

        ORD-005 | Widget A | Acme | 3/14/10
        ORD-005 | Widget B | Acme | 3/14/10

    I tried using SELECT MAX and several variations of sub-queries, but I just can't seem to get it working. Any help would be hugely appreciated!
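
    A sketch of one common shape for this kind of latest-row-per-group requirement, using Oracle analytic functions; it follows the join columns described above but is untested against the real schema:

        -- Rank each vendor's orders by creation date, newest first, then keep
        -- only the newest order's rows (all of its business units). DENSE_RANK
        -- keeps every row of the tied latest order.
        SELECT unique_doc_name, biz_unit_object_name, vendor_object_name, created_at
        FROM (
            SELECT o.unique_doc_name,
                   bu.biz_unit_object_name,
                   v.vendor_object_name,
                   o.created_at,
                   DENSE_RANK() OVER (PARTITION BY v.vendor_object_name
                                      ORDER BY o.created_at DESC) AS rk
            FROM biz_doc2 o
            JOIN udef_vendor v ON v.parent_object_id = o.objectid
            JOIN biz_unit   bu ON bu.parent_object_id = o.objectid
        )
        WHERE rk = 1;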

  • How do I write this JPQL query? (Java)

    - by Nitesh Panchal
    Hello. Say I have 5 tables:

        tblBlogs:           BlogId, BlogTitle
        tblBlogPosts:       BlogPostsId, BlogId, PostTitle
        tblBlogPostComment: BlogPostCommentId, BlogPostsId, BlogMemberId, CommentText
        tblUser:            UserId, FirstName
        tblBlogMember:      BlogMemberId, UserId, BlogId

    Now I want to retrieve only those blogs and posts on which the blog member has actually commented. In short, how do I write this plain old SQL:

        Select b.BlogTitle, bp.PostTitle, bpc.CommentText
        from tblBlogs b
        Inner join tblBlogPosts bp on b.BlogId = bp.BlogId
        Inner Join tblBlogPostComment bpc on bp.BlogPostsId = bpc.BlogPostsId
        Inner Join tblBlogMember bm On bpc.BlogMemberId = bm.BlogMemberId
        Where bm.UserId = 1;

    As you can see, everything is an inner join, so only those rows will be retrieved where the user has commented on some post of some blog. Suppose he has joined 3 blogs, whose ids are 1, 2 and 3 (the blogs a user has joined are in tblBlogMember), but the user has only commented in blog 2 (on, say, BlogPostId = 1). That row will be retrieved, and 1 and 3 won't be, as it is an inner join. How do I write this kind of query in JPQL? In JPQL we can only write simple queries, like:

        Select bm.blogId from tblBlogMember bm Where bm.UserId = :objUser

    where objUser is supplied using em.find(User.class, 1). Thus, once we get all the blogs (here blogId represents a blog object) which the user has joined, we can loop through and do all the fancy things. But I don't want to fall into this looping business and write all those things in my Java code; instead, I want to leave that for the database engine to do. So, how do I write the above plain SQL in JPQL? And what type of object will the JPQL query return? Since I am only selecting a few fields from each table, which class should I typecast the result to? I think I posted my requirement correctly; if I am not clear, please let me know. Thanks in advance :).
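
    For reference, a sketch of what the JPQL version could look like, assuming entities Blog, BlogPost, BlogPostComment and BlogMember are mapped over these tables with the obvious relationships (every name below is an assumption for illustration):

        // A multi-field select like this comes back as Object[] rows
        List<Object[]> rows = (List<Object[]>) em.createQuery(
                "SELECT b.blogTitle, bp.postTitle, bpc.commentText " +
                "FROM BlogPostComment bpc " +
                "JOIN bpc.blogPost bp " +
                "JOIN bp.blog b " +
                "JOIN bpc.blogMember bm " +
                "WHERE bm.user = :user")
            .setParameter("user", objUser)
            .getResultList();

        // Each row is {blogTitle, postTitle, commentText}. A constructor
        // expression, SELECT NEW com.example.CommentView(...), would give
        // typed results instead of Object[].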

  • Translating Where() to SQL

    - by MBoros
    Hi. I saw DamienG's article (http://damieng.com/blog/2009/06/24/client-side-properties-and-any-remote-linq-provider) on how to map client properties to SQL. I ran through the article and saw great potential in it; mapping client properties to SQL is definitely an awesome idea. But I wanted to use this for something a bit more complicated than just concatenating strings. At the moment we are trying to introduce multilinguality into our business objects, and I hoped we could leave all the existing LINQ to SQL queries intact and just change the code of the multilingual properties, so they would actually return the given property in the CurrentUICulture. The first idea was to change these fields to XML and then try Object.Property.Elements().Where(...), but it got stuck on the Elements(), as it couldn't be translated to SQL. I read somewhere that XML fields are actually regarded as strings, and only on the app server do they become XElements, so this way the filtering would happen on the app server anyway, not in the DB. Fair point, it won't work like this. Let's try something else... So the second idea was to create a PolyGlots table (name taken from http://weblogic.sys-con.com/node/102698?page=0,1), a PolyGlotTranslations table and a Culture table, where the PolyGlots would be referenced from each internationalized property. This way I wanted to say, for example:

        private static readonly CompiledExpression<Announcement, string> nameExpression =
            DefaultTranslationOf<Announcement>
                .Property(e => e.Name)
                .Is(e => e.NamePolyGlot.PolyGlotTranslations
                    .Where(t => t.Culture.Code == Thread.CurrentThread.CurrentUICulture.Name)
                    .Single().Value);

    Now, unfortunately, here I get an error that the Where() function cannot be translated to SQL, which is a bit disappointing, as I was sure it would go through. I guess it is failing because the IEntitySet is basically an IEnumerable, not an IQueryable; am I right? Is there another way to use the CompiledExpression class to achieve this goal? Any help appreciated.

  • SQL query - how to apply a limit within GROUP BY

    - by Raj
    Hey guys, assume I have a table named t1 with the following fields: ROWID, CID, PID, Score, SortKey. It has the following data:

        1, C1, P1, 10, 1
        2, C1, P2, 20, 2
        3, C1, P3, 30, 3
        4, C2, P4, 20, 3
        5, C2, P5, 30, 2
        6, C3, P6, 10, 1
        7, C3, P7, 20, 2

    What query do I write so that it applies GROUP BY on CID, but instead of returning one single result per group, it returns a maximum of 2 results per group? The where condition is Score >= 20, and I want the results ordered by CID and SortKey. If I ran my query on the above data, I would expect the following result:

        Results for C1 (note: ROWID 1 is not considered, as its score < 20):
        C1, P2, 20, 2
        C1, P3, 30, 3

        Results for C2 (note: ROWID 5 appears before ROWID 4, as ROWID 5 has the lesser SortKey value):
        C2, P5, 30, 2
        C2, P4, 20, 3

        Results for C3 (note: ROWID 6 does not appear, as its score is less than 20, so only 1 record is returned here):
        C3, P7, 20, 2

    IN SHORT, I WANT A LIMIT WITHIN A GROUP BY. I want the simplest solution and want to avoid temp tables; sub-queries are fine. Also note I am using SQLite for this.
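
    One sub-query-only shape that matches the expected results above is the classic correlated top-N-per-group count; a sketch, assuming SortKey is unique within each CID:

        -- Keep a row only if fewer than 2 qualifying rows in the same group
        -- sort ahead of it. Old SQLite versions without window functions
        -- accept this correlated sub-query form.
        SELECT CID, PID, Score, SortKey
        FROM t1 AS a
        WHERE Score >= 20
          AND (SELECT COUNT(*)
               FROM t1 AS b
               WHERE b.CID = a.CID
                 AND b.Score >= 20
                 AND b.SortKey < a.SortKey) < 2
        ORDER BY CID, SortKey;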

  • Important question about LINQ to SQL performance on high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all that good stuff). After hearing about LINQ to SQL and MVC, I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always used to forget to close my connections when using data readers. Now I don't have to worry about that. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data at the beginning, works with it and updates it. The updates are primarily increments and decrements of values. I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @Id

    It worked with no problems, obviously. But with LINQ to SQL, the data is taken at the beginning, moved to the class, changed and then saved:

        stats.RegisteredUsers++;
        db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by someone else (that happens on my site all the time), then LINQ will be like: "oops, this value is already 100,001. Whatever, I'll throw an exception." You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time; the stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
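
    One escape hatch that keeps the rest of the code on LINQ to SQL is DataContext.ExecuteCommand, which sends a relative update of the old style straight to the server; a sketch, with the table and column names guessed:

        // Atomic increment expressed through the existing DataContext.
        // ExecuteCommand turns the {0} placeholder into a SQL parameter,
        // so this is the old UPDATE ... SET value = value + 1 pattern.
        using (var db = new MyDataContext())
        {
            db.ExecuteCommand(
                "UPDATE Stats SET RegisteredUsers = RegisteredUsers + 1 WHERE ID = {0}",
                statsId);
        }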

  • Django image upload: IOError [Errno 2] could not find path -- and yet it's saving the image there anyway?

    - by Rob
    I have an issue where the local version of Django is handling image upload as expected, but my server is not. Note: I am using a Django container on MediaTemple.net (Grid-Server). Here is my code:

        def view_settings(request):
            # <snip>
            if request.POST:
                success_msgs = ()
                mForm = MainProfileForm(request.POST, request.FILES, instance=mProfile)
                pForm = ChangePasswordForm(request.POST)
                eForm = ChangeEmailForm(request.POST)
                if mForm.is_valid():
                    m = mForm.save(commit=False)
                    if mForm.cleaned_data['avatar']:
                        m.avatar = upload_photo(request.FILES['avatar'], settings.AVATAR_SAVE_LOCATION)
                    m.save()
                    success_msgs += ('profile pictured updated',)
            # <snip>

        def upload_photo(data, saveLocation):
            savePath = os.path.join(settings.MEDIA_ROOT, saveLocation, data.name)
            destination = open(savePath, 'wb+')
            for chunk in data.chunks():
                destination.write(chunk)
            destination.close()
            return os.path.join(saveLocation, data.name)

    Here's where it gets whacky, and I was hoping someone could shed light on this error, because either a) it's the wrong error code, or b) something is happening to the file before it's completely handled. To recap: the file was actually uploaded to the server in the intended directory, and yet this error message persists:

        IOError at /user/settings
        [Errno 2] No such file or directory: u'/home/user66666/domains/example.com/html/media/images/avatars/DSC03852.JPG'

        Environment:
        Request Method: POST
        Request URL: http://111.111.111.111:2011/user/settings
        Django Version: 1.0.2 final
        Python Version: 2.4.4
        Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes',
            'django.contrib.sessions', 'django.contrib.sites', 'ctrlme', 'usertools',
            'easy_thumbnails']
        Installed Middleware: ('django.middleware.common.CommonMiddleware',
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware')

        Traceback:
        File "/home/user6666/containers/django/leonidas/usertools/views.py" in view_settings
            m.avatar = upload_photo(request.FILES['avatar'], settings.AVATAR_SAVE_LOCATION)
        File "/home/user666666/containers/django/leonidas/usertools/functions.py" in upload_photo
            destination = open(savePath, 'wb+')

  • Database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of salespersons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and salespersons are free to choose from those combinations to estimate the sales price. The problem is, daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of SQL queries I shall need. This is my initial design:

        Product                     {ID, Code, Name, IsActive}
        ProductXYZPriceHistory      {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson                 {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot                       {ID, Code, Name, IsActive}
        Outlet                      {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy               {ID, Code, Name, ParentID, AreaLevel, IsActive}
        DailySales                  {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table only. Is there any way to break this particular table up to achieve granularity?
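
    Purely as a strawman for discussion, DailySales is often split into a narrow header plus line items, so each sold product is one row and sheet-level facts are not repeated per product; everything below, including every name and type, is an assumption rather than a recommendation from the thread:

        -- Hypothetical split: one header row per salesperson per day,
        -- one line row per product sold on that sheet
        CREATE TABLE DailySalesHeader (
            ID            INT  NOT NULL PRIMARY KEY,
            SalesPersonID INT  NOT NULL,
            OutletID      INT  NOT NULL,
            SaleDate      DATE NOT NULL
        );

        CREATE TABLE DailySalesLine (
            ID         INT           NOT NULL PRIMARY KEY,
            HeaderID   INT           NOT NULL REFERENCES DailySalesHeader(ID),
            ProductID  INT           NOT NULL,
            PriceID    INT           NOT NULL,   -- which price type was applied
            Quantity   INT           NOT NULL,
            SalesPrice DECIMAL(12,2) NOT NULL,   -- price actually charged
            Discount   DECIMAL(12,2) NOT NULL DEFAULT 0
        );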
