Search Results

Search found 43986 results on 1760 pages for 'sql session state'.

Page 606 of 1760

  • How can I implement NHibernate session per request without a dependency on NHibernate?

    - by Ben
    I've raised this question before but am still struggling to find an example that I can get my head around (please don't just tell me to look at the S#arp Architecture project without at least some directions). So far I have achieved near persistence ignorance in my web project. My repository classes (in my data project) take an ISession in the constructor:

        public class ProductRepository : IProductRepository
        {
            private ISession _session;

            public ProductRepository(ISession session)
            {
                _session = session;
            }
        }

    In my global.asax I expose the current session, and I create and dispose the session on BeginRequest and EndRequest (this is where I have the dependency on NHibernate):

        public static ISessionFactory SessionFactory = CreateSessionFactory();

        private static ISessionFactory CreateSessionFactory()
        {
            return new Configuration()
                .Configure()
                .BuildSessionFactory();
        }

        protected MvcApplication()
        {
            BeginRequest += delegate
            {
                CurrentSessionContext.Bind(SessionFactory.OpenSession());
            };
            EndRequest += delegate
            {
                CurrentSessionContext.Unbind(SessionFactory).Dispose();
            };
        }

    And finally my StructureMap registry:

        public AppRegistry()
        {
            For<ISession>().TheDefault
                .Is.ConstructedBy(x => MvcApplication.SessionFactory.GetCurrentSession());
            For<IProductRepository>().Use<ProductRepository>();
        }

    It would seem I need my own generic implementations of ISession and ISessionFactory that I can use in my web project and inject into my repositories? I'm a little stuck, so any help would be appreciated. Thanks, Ben

    Read the article

  • Presenting Designing an SSIS Execution Framework to Steel City SQL 18 Jan 2011!

    - by andyleonard
    I'm honored to present Designing an SSIS Execution Framework (Level 300) to Steel City SQL - the Birmingham, Alabama chapter of PASS - on 18 Jan 2011! The meeting starts at 6:00 PM and will be held at: New Horizons Computer Learning Center, 601 Beacon Pkwy. West, Suite 106, Birmingham, Alabama 35209 ( Map for directions ). Abstract: In this “demo-tastic” presentation, SSIS trainer, author, and consultant Andy Leonard explains the what, why, and how of an SSIS framework that delivers metadata-driven...(read more)

    Read the article

  • New Book! SQL Server 2012 Integration Services Design Patterns!

    - by andyleonard
    SQL Server 2012 Integration Services Design Patterns has been released! The book is done and available thanks to the hard work and dedication of a great crew: Michelle Ufford ( Blog | @sqlfool ) – co-author Jessica M. Moss ( Blog | @jessicammoss ) – co-author Tim Mitchell ( Blog | @tim_mitchell ) – co-author Matt Masson ( Blog | @mattmasson ) – co-author Donald Farmer ( Blog | @donalddotfarmer ) – foreword David Stein ( Blog | @made2mentor ) – technical editing Mark Powers – editing Jonathan Gennick...(read more)

    Read the article

  • Is there a good way to QuickCheck Happstack.State methods?

    - by Paul Kuliniewicz
    I have a set of Happstack.State MACID methods that I want to test using QuickCheck, but I'm having trouble figuring out the most elegant way to accomplish that. The problems I'm running into are:

    - The only way to evaluate an Ev monad computation is in the IO monad, via query or update.
    - There's no way to create a purely in-memory MACID store; this is by design. Therefore, running things in the IO monad means there are temporary files to clean up after each test.
    - There's no way to initialize a new MACID store except with the initialValue for the state; it can't be generated via Arbitrary unless I expose an access method that replaces the state wholesale.
    - Working around all of the above means writing methods that only use features of MonadReader or MonadState (and running the test inside Reader or State instead of Ev). This means forgoing the use of getRandom or getEventClockTime and the like inside the method definitions.

    The only options I can see are:

    - Run the methods in a throw-away on-disk MACID store, cleaning up after each test and settling for starting from initialValue each time.
    - Write the methods so that most of the code runs in MonadReader or MonadState (which is more easily testable), and rely on a small amount of non-QuickCheck-able glue around it that calls getRandom or getEventClockTime as necessary.

    Is there a better solution that I'm overlooking?

    Read the article

  • Why would the value of the Session object go away?

    - by Michael
    In ASP.NET/VB.NET 2005 I've created a small web app with two forms. In Default there is a button, a text box, and a link. When you press the button, whatever is in the text box is put into the Session:

        Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
            System.Web.HttpContext.Current.Session("mykey") = Me.TextBox1.Text
        End Sub

    The link's NavigateUrl property is "~/Retrieve.aspx". On the Retrieve page it pulls the information out of the Session (that is, whatever you typed in the text box):

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            Dim myValue As String = CType(System.Web.HttpContext.Current.Session("mykey"), String)
            Response.Write(myValue)
        End Sub

    Now it works on my machine, and when deployed to the production web server it works fine if I run the browser remotely there - what you type on the first page appears on the second page. But when I run it from my machine against the production web server, the second page is always blank. Why? It doesn't seem like a timeout issue - brand new web.config taking the defaults. Any pointers would be greatly appreciated! Thanks!

    Read the article

  • Virtual hosting doesn't work: it logs me in through a previous session

    - by Pablo
    When I log in through the proxy in one browser session I have to log in, but when I open another session it has automatically logged me in (as if I've picked up session 1). This does not happen if I use http://192.168.0.9:9070 directly - it forces me to log in each time. So I know the application is working; it's just the proxy server that seems to apply the login to every session (from http://icerap.limeo.com).

        # ************************************************************************
        # Start of My stuff <<<------------------------------------------------------
        # ************************************************************************
        #<Proxy *>
        #Order Deny,Allow
        #Deny from all
        #Allow from 192.168.0
        #</Proxy>

        # blog
        <VirtualHost *:80>
            ServerName icerap.limeo.com
            ProxyPass / http://192.168.0.9:9070/
            ProxyPassReverse / http://192.168.0.9:9070/
        </VirtualHost>

        # www
        <VirtualHost *:80>
            ServerName helpdesk.limeo.com
            ProxyPass / http://192.168.0.9:9055/
            ProxyPassReverse / http://192.168.0.9:9055/
        </VirtualHost>

        # blog
        <VirtualHost *:80>
            ServerName IceCake.limeo.com
            ProxyPass / http://192.168.0.9:9000/
            ProxyPassReverse / http://192.168.0.9:9000/
        </VirtualHost>

        # End of Limeo stuff <<<------------------------------------------------------
        # ************************************************************************

    Read the article

  • Can I make a TCP/IP session last less than 60 seconds?

    - by Pavel
    Our server is overloaded with TCP/IP sessions; we have 1200-1500 of them, and most are hanging in the TIME_WAIT state. It turns out that a connection in the TIME_WAIT state occupies a socket until a 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I have made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see the TCP/IP connection hanging in the TIME_WAIT state for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or make it shorter, to free the socket for new connections? I understand why a TCP/IP connection enters the TIME_WAIT state, but I don't understand why Internet Explorer does not close the connection once the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). Apache's "Keep Alive" option is set to 0.

    Read the article

  • Is there a Symfony callback at the termination of a session?

    - by Rob Wilkerson
    I have an application that authenticates against an external server in a filter. In that filter, I'm trying to set a couple of session attributes on the user by using Symfony's setAttribute() method:

        $this->getContext()->getUser()->setAttribute( 'myAttribute', 'myValue' );

    What I'm finding is that, if I dump $_SESSION immediately after setting the attribute, the attribute is nowhere to be found. On the other hand, if I call getAttribute( 'myAttribute' ), I get back exactly what I put in. All along, I've assumed that reading/writing user attributes was synonymous with reading/writing the session, but that seems to be an incorrect assumption. Is there a timing issue? I'm not getting any non-object errors, so it seems that the user is fully initialized. Where is the disconnect here? Thanks.

    UPDATE: The reason this was happening is that I had some code in myUser::shutdown() that cleared out a bunch of stuff. Because myUser is loosely equivalent to $_SESSION (at least with respect to attributes), I assumed that the shutdown() method would be called at the end of each session. It's not; it seems to get called at the close of each request, which is why my attributes never seemed to get set. Now, though, I'm left wondering whether there's a session-closing callback. Anyone know?

    Read the article

  • When do Symfony's user attributes get written to session?

    - by Rob Wilkerson
    I have a Symfony app that populates the "widgets" of a portal application, and I'm noticing something (that seems) odd. The portal app has iframes that make calls to the Symfony app. On each of those calls, a random user key is passed on the query string. The Symfony app stores that key in its session using myUser->setAttribute(). If the incoming value is different from what it has in session, it overwrites the session value. In pseudo-code (and applying a synchronous nature for clarity, even though it may not exist):

        # Widget request arrives with ?foo=bar
        if the user attribute 'foo' does not equal 'bar'
            overwrite the user attribute 'foo' with 'bar'
        end

    What I'm noticing is that, on a portal page with multiple widgets (read: multiple requests coming in more or less simultaneously) where the value needs to be overwritten, each request is trying to overwrite. Is this a timing problem? When I look at the log prints, I'd expect the first request that arrives to overwrite and subsequent requests to see that the user attribute they received matches what was just put into cache by the initial request. In this scenario, it could be that subsequent requests begin (and are checked) even before the first one - the one that should overwrite the cached value - has completely finished. Are session values not really available to subsequent requests until one request has completed entirely, or could there be something else that I'm missing? Thanks.

    Read the article

  • How are people using virtualisation with SQL Server? Part 2

    - by GavinPayneUK
    This is part two of an article reviewing the results of a virtualisation with SQL Server survey I performed. Part one can be found here. How do you size a new virtual server? When deploying a new virtual server you want to size it according to its predicted workload, knowing that additional resource can be allocated as required. Unlike with physical servers, giving your virtual server more resource than it actually needs can actually be a bad thing - if nothing else, if you've got resource you're not using...(read more)

    Read the article

  • Stairway to Transaction Log Management in SQL Server, Level 6: Managing the Log in BULK_LOGGED Recovery Model

    A DBA may consider switching a database to the BULK_LOGGED recovery model in the short term during, for example, bulk load operations. When a database is operating in the BULK_LOGGED model these, and a few other operations such as index rebuilds, can be minimally logged and will therefore use much less space in the log.
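
    The pattern the level describes boils down to a short T-SQL sequence; a minimal sketch, assuming hypothetical database, table, and file names:

        -- Switch to BULK_LOGGED for the duration of the bulk load.
        ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

        BULK INSERT dbo.Staging
        FROM 'C:\loads\staging.dat'
        WITH (TABLOCK);  -- a table lock helps the load qualify for minimal logging

        -- Switch back, then take a log backup so point-in-time
        -- recovery is available again from this point forward.
        ALTER DATABASE MyDB SET RECOVERY FULL;
        BACKUP LOG MyDB TO DISK = 'C:\backups\MyDB_log.trn';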

    Read the article

  • filestream restore very slow to DR server

    - by Jim
    We are backing up a database containing 100GB of filestream data. The backup takes under two hours to write, but restoring the same database to the DR environment is taking over forty hours. We don't have the same problem with other (non-filestream) databases, including some that are much larger. How do we get to the bottom of the problem? The database is in full recovery mode, and we are doing a full backup. Thanks.
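
    One place to start is to watch what the restore is waiting on from a second connection; a hedged sketch using the standard dynamic management views:

        -- Run from another session while the restore is in flight.
        -- Shows progress and the current wait for backup/restore commands.
        SELECT session_id,
               command,
               percent_complete,
               wait_type,
               wait_time
        FROM sys.dm_exec_requests
        WHERE command LIKE 'RESTORE%' OR command LIKE 'BACKUP%';

    A wait type that is mostly disk-related would point at the DR storage rather than at filestream handling itself.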

    Read the article

  • MS Access Premiere Products Exercise

    - by rynwtts
    I am working with Microsoft Access, Premiere Products Exercises, for a college course, and I can't seem to get past a specific question. We are working with DBDL and E-R diagrams. The question is: indicate the changes you need to make to the design of the Premiere Products database to support the following situation. A customer is not necessarily represented by a single sales rep but can be represented by several sales reps. When a customer places an order, the sales rep who gets the commission on the order must be one of the collection of sales reps who represent the customer. In the existing database each customer is represented by a single sales rep, which yields a one-to-many relationship between reps and customers. I need to enable a customer to have several sales reps, and make it so that only those sales reps will be eligible for commission on each order.
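
    In relational terms this is a many-to-many relationship, which calls for a junction table; a sketch in SQL, where the table and column names follow the usual Premiere Products conventions but are assumptions here:

        -- A customer can have many reps, and a rep many customers.
        CREATE TABLE CUSTOMER_REP (
            CUSTOMER_NUM CHAR(3) NOT NULL REFERENCES CUSTOMER (CUSTOMER_NUM),
            REP_NUM      CHAR(2) NOT NULL REFERENCES REP (REP_NUM),
            PRIMARY KEY (CUSTOMER_NUM, REP_NUM)
        );

        -- Pointing the order's foreign key at the junction table means the
        -- commissioned rep must be one of the reps representing the customer.
        -- (Assumes ORDERS already carries CUSTOMER_NUM and REP_NUM columns.)
        ALTER TABLE ORDERS ADD CONSTRAINT FK_ORDERS_CUSTOMER_REP
            FOREIGN KEY (CUSTOMER_NUM, REP_NUM)
            REFERENCES CUSTOMER_REP (CUSTOMER_NUM, REP_NUM);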

    Read the article

  • DB2 insert performance - How to measure

    - by svrist
    [From Stack Overflow] I'm trying to find a way to speed up my inserts into DB2 9.7.1 (Ubuntu Linux). I'm watching vmstat and trying to gather some statistics via the db2 get snapshot commands, but I'm not able to figure out which numbers to look at to see where the trouble is. I've read lots of stuff like http://www.eggheadcafe.com/software/aspnet/35692526/question-multiple-row-in.aspx and http://www.ibm.com/developerworks/data/library/tips/dm-0403wilkins/, and tricks like ALTER TABLE lalala APPEND ON work somewhat (the difference between a dd if=/dev/zero and an insert is still a factor of 10), but I would like to be able to find the counters or other performance indicators that actually show why it makes sense to use those tricks. For example: What is the metric called that shows me that it is buffer page allocation (FSCR stuff) that is the problem? Where do I see that insert time is hampered by clustered indexes? I find db2top very useful, but I'm still searching for a more direct "this is your bottleneck" view.
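
    One hedged starting point, assuming the SYSIBMADM administrative views are enabled (they expose the same monitor elements as db2 get snapshot through plain SQL): sample the counters before and after a load and compare the deltas.

        -- Database-level counters relevant to insert throughput.
        -- Column names are standard monitor elements but are worth
        -- double-checking against the DB2 9.7 documentation.
        SELECT ROWS_INSERTED,
               POOL_DATA_L_READS,   -- logical data-page reads
               POOL_DATA_P_READS,   -- physical data-page reads
               POOL_INDEX_P_READS,  -- physical index-page reads (index maintenance)
               POOL_DATA_WRITES     -- data pages written out by the buffer pool
        FROM SYSIBMADM.SNAPDB;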

    Read the article

  • MemCached on Windows x64

    - by Django Reinhardt
    This question has previously been asked, but that was a year ago and I wanted to know if there have been any developments since then. Basically we'd like to use a memcached server on a Windows Server 2008 R2 machine... which is x64-only, obviously. I haven't found any details on a Win64 version of memcached, but there is still the solution from the previous thread (which I haven't tried yet) of using a bit of software called MemCacheD Manager running memcached 1.2.6. However, the current version of memcached is 1.4.4, and I was wondering if there have been any improvements since then.

    Read the article

  • Spreadsheet RDBMS

    - by John Nilsson
    I'm looking for software (or a set of software) that will let me combine spreadsheet and database workflows: data entry in a spreadsheet to enable simple entry from the clipboard, plus analysis based on joins, unions, aggregates and pivot/data pilot summaries. So far I've only found either spreadsheets OR db applications, but no good combination. OpenOffice Base with Calc for tables doesn't support aggregates, for example; Google Spreadsheet + Visualization API doesn't support unions or joins; Zoho DB doesn't let me paste from the clipboard. Any hints on software that could be used? Basically I'm trying to do some analysis of my personal bank transactions. Problem 1, ETL: the data has to be moved from my bank to a database. My current solution is to manually copy and paste the data into one spreadsheet per account from my internet bank. Pains: not very scriptable, lots of scrolling to reach the point to paste, and I have to apply sorting and formatting to the pasted data each time. Problem 2, analysis: I then want to aggregate the different accounts in one sweep to track transfers per type of transfer over all accounts. The actual aggregation is still unsolved because I can't find a UNION equivalent in the spreadsheets I've tried.
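
    In any of the database-backed options the aggregation step is a plain UNION ALL plus GROUP BY; a sketch, with hypothetical per-account tables:

        -- Hypothetical tables checking_tx and savings_tx, one per account,
        -- each with (tx_date, tx_type, amount).
        SELECT tx_type,
               SUM(amount) AS total_amount
        FROM (
            SELECT tx_date, tx_type, amount FROM checking_tx
            UNION ALL
            SELECT tx_date, tx_type, amount FROM savings_tx
        ) AS all_accounts
        GROUP BY tx_type;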

    Read the article

  • Many partitions on a single filegroup - does it make sense?

    - by river0
    Hi, I'm designing a data warehouse solution and I'm a newbie in disk configuration issues, so let me explain. Our storage is spread over 6 storage enclosures, each of them having 5 RAID-1 disk arrays, with 2 LUNs defined per disk array, which makes a total of 48 LUNs (this follows the Microsoft Fast Track recommendations for data warehouse architectures). I would like to partition my data; on other projects I have worked on before, we always followed a 1 partition - 1 filegroup rule. The Microsoft Fast Track recommendations advise creating a filegroup and then, for that filegroup, a data file per LUN... but I intend to have week-level partitioning, and if I apply that rule I think I'll get too many files and a complex layout. I'm thinking of creating just one filegroup (with the 48 LUN data files) but still creating the partitions, since I want to keep some of the benefits of partitions, like partition switching... Is this scenario not recommended? What would you suggest?
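
    The single-filegroup layout being considered is directly supported; a minimal T-SQL sketch with hypothetical names (weekly boundary dates, every partition mapped to the same filegroup):

        -- Weekly partition function; boundary dates are placeholders.
        CREATE PARTITION FUNCTION pf_weekly (date)
        AS RANGE RIGHT FOR VALUES ('2011-01-03', '2011-01-10', '2011-01-17');

        -- ALL TO maps every partition to one filegroup; SWITCH, SPLIT and
        -- MERGE still work, you only give up per-partition placement.
        CREATE PARTITION SCHEME ps_weekly
        AS PARTITION pf_weekly ALL TO ([DW_FG]);

        CREATE TABLE dbo.FactSales (
            sale_date date NOT NULL,
            amount    money NOT NULL
        ) ON ps_weekly (sale_date);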

    Read the article

  • How to find out where (or if) MySQL 5 logs are stored on a WHM/cPanel machine

    - by moi
    I have a WHM/cPanel reseller hosting account on a virtual private server (Linux), and I have root access to the machine via SSH. I am trying to locate a file that will help me determine which users have accessed which databases and from which hosts. I would imagine this kind of data is stored in a log file somewhere. The MySQL documentation says: "The general query log - Established client connections and statements received from clients" (see http://dev.mysql.com/doc/refman/5.0/en/server-logs.html). It also says: "By default, all log files are created in the mysqld data directory." So, I am NOT asking where the general query logs are stored (cos I expect I will get answers saying "it depends"). Please help me work out: how can I go about finding out where the MySQL general query logs are stored on a Linux machine? A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following:

        [mysqld]
        skip-bdb
        skip-innodb
        set-variable = max_connections=500
        safe-show-database

    I have also looked in /var/lib/mysql/, but I could not see any log-like file names in that directory. Any clues on this would be most welcome.
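
    One way is to ask the server itself, sketched below; note the general_log and log_output system variables only exist from MySQL 5.1 onward (on 5.0 the general log is enabled solely by a log option in my.cnf, so with no such option present it is simply off):

        -- Where is the data directory (the default home for log files)?
        SHOW VARIABLES LIKE 'datadir';

        -- Is the general query log enabled, and where does it write? (5.1+)
        SHOW VARIABLES LIKE 'general_log%';
        SHOW VARIABLES LIKE 'log_output';

        -- Enable it at runtime (5.1+); beware the volume on a busy server.
        SET GLOBAL general_log = 'ON';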

    Read the article

  • Automating SSRS 2008 R2 report snapshots while running reports with the most recent data

    - by Mr Shoubs
    I would like to automate a report snapshot, but there is only an option to take a snapshot manually on the Report History tab. All the resources I've found suggest I need to go to processing options and select "Render this report from a snapshot", but I don't want to do that - when I go to a report, I want the most recent data. However, daily at midnight I'd like to take a snapshot and store it in the history, in case I want to compare the reports as of midnight for the last few weeks. Or am I doing this wrong, and do I have to create a subscription instead? Note: this is for an auditing database and it has far too much data to query a range of more than 1 day (one day has over 1 million rows on its own) - reports are restricted accordingly.

    Read the article

  • Can you authenticate into SSAS with AD LDS (ADAM) accounts?

    - by Jaxidian
    I'm very new to AD LDS and experienced but not qualified with SSAS, so my apologies for my ignorance with these. We have a couple of implementations where we expose SSAS via an HTTPS proxy (msmdpump.dll), and currently we have a temporary domain set up to handle this (meaning our end-users have a second account and credentials to manage, which is non-ideal). I want to move us towards a more permanent solution, which I'm thinking means moving all authentication to AD LDS for our web apps, SSAS, and others. However, SSAS is where I'm concerned about this. I know SSAS requires Windows Authentication to play nicely, and that this ultimately means Active Directory will be involved. Is there a way to get this done with AD LDS instead of having to use a full AD DS implementation? If so, how? (Note: my question over at Stack Overflow had a suggestion that I post this question here on Server Fault instead. My apologies if I'm not asking in the right forum.)

    Read the article

  • problem connecting to datasource defined in freetds.conf

    - by pkaeding
    I can connect successfully to my database using tsql when I bypass the freetds.conf file, like so:

        % TDSVER=8.0 tsql -H 10.100.102.202 -p 1086 -U sa

    After I enter my password, I am presented with a 1> prompt, and it is ready for my commands. However, if I try to connect using the definition in my freetds.conf file, like this:

        % tsql -S Millie -U sa

    after entering my password, it seems to be trying to generate a prompt, but it just keeps counting. I will see 1, followed by 2, etc, without ever displaying a > character. Here is what I have for my freetds.conf:

        [global]
            # TDS protocol version
            tds version = 8.0
            text size = 64512

        [Millie]
            host = 10.100.102.202
            port = 1086

    What could be causing this anomaly? If it helps, here is the output of tsql -C:

        % tsql -C
        Compile-time settings (established with the "configure" script)
                                Version: freetds v0.82
                 freetds.conf directory: /usr/local/etc
         MS db-lib source compatibility: no
            Sybase binary compatibility: no
                          Thread safety: yes
                          iconv library: yes
                            TDS version: 5.0
                                  iODBC: no
                               unixodbc: no

    Read the article

  • What proportion of DBA skills are platform-independent?

    - by Arkaaito
    Problem: we've grown to the point where we need a real DBA. Real DBAs are hard to find. Possible solution: I know someone with extensive experience (currently working on MSSQL) who is looking for a new DBA job. He might be willing to consider working for a MySQL shop (I haven't asked). Snag: I don't know to what extent MSSQL DBA skills map to MySQL DBA skills. I'm a developer, so I know enough about MySQL to develop apps which use it (including the basic performance-tuning, index selection, etc.), do schema design, and perform simple utility tasks (backup with mysqldump, scripting, etc.). I don't know anything about MSSQL. Nor do I actually know much about the boundaries of the DBA role. Can anyone with more experience - perhaps as a DBA - weigh in on this? Are there enough similarities between MSSQL and MySQL that it's worth asking a MSSQL DBA if he'd be interested in applying for a MySQL position?

    Read the article

  • MSSQL 2008 License for both Web application and desktop application

    - by Bayonian
    I have an ASP.NET web application using MSSQL Express at the moment, but I want to move to MSSQL 2008. However, I'm NOT sure what kind of license I should buy. I'm considering the Processor License, according to this document, but I'm not sure if it's the right choice. If I buy User CALs, should I buy only one CAL for my web application, or one for every visitor who visits my web site? I also have a Windows desktop application that writes/reads data from the server. Do I need a separate license for this Windows application if I buy a Processor License? Thank you for any suggestions.

    Read the article
