Search Results

Search found 15301 results on 613 pages for 'global assembly cache'.


  • How to fix a protocol violation in C#

    - by Jeremy Styers
    I have a C# "client" and a Java "server". The Java server has a WSDL it serves to the client. So far the C# side can make a request and the server performs the SOAP action. My server gets the SOAP request, executes the method, and tries to return the result to the client. When I send the response back to C#, however, I get "The server committed a protocol violation. Section=ResponseStatusLine". I have spent all day trying to fix this and have come up with nothing that works. If I explained everything I did, this post would be very long, so I'll keep it brief. I Googled for hours and everything tells me my "response line" is correct. I tried shutting down Skype, rearranging the response line, adding things, taking things away, etc. All to no avail. This is for a class assignment, so no, I cannot use APIs to help. I must do everything manually on the server side: parsing by hand, and creating the SOAP response and the HTTP response by hand. Just thought you'd like to know that before you say to use something that does it for me. I even made sure my server was sending the correct header by creating a Java client that mimicked the C# one so I could see what the server returned; it returns exactly what I told it to send. I also pointed that Java client at an actual running C# service, to see what a real service returns, and it returned basically the same thing. To be safe, I copied its response and sent that to the C# client, and it still threw the error. Can anyone help? I've tried everything I can think of, including adding useUnsafeHeaderParsing to my app.config. Nothing works. I send exactly what a real service sends and it yells at me; I send what I want and it yells. I'm sending this: "200 OK HTTP/1.0\r\n" + "Content-Length: 201\r\n" + "Cache-Control: private\r\n" + "Content-Type: text/xml; charset=utf-8\r\n\r\n";
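
    One thing worth noting for readers hitting the same error: an HTTP status line must start with the protocol version, i.e. "HTTP/1.0 200 OK", not "200 OK HTTP/1.0" as in the string above, and a malformed status line is exactly what "Section=ResponseStatusLine" complains about. As a rough sketch (payload and header values are illustrative, written as a C# string only for readability), a minimal well-formed response looks like this:

        // Status line first: "<version> <code> <reason-phrase>", then headers, then a blank line, then the body.
        string body = "<soap:Envelope>...</soap:Envelope>";   // hypothetical SOAP payload
        string response =
            "HTTP/1.0 200 OK\r\n" +
            "Content-Type: text/xml; charset=utf-8\r\n" +
            "Content-Length: " + Encoding.UTF8.GetByteCount(body) + "\r\n" +
            "Cache-Control: private\r\n" +
            "\r\n" +
            body;

    (Encoding lives in System.Text; Content-Length must be the byte count of the body only, not of the header block.)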

    Read the article

  • Emacs, C++ code completion for vectors

    - by Caglar Toklu
    Hi, I am new to Emacs, and I have the following code as a sample. I have installed GNU Emacs 23.1.1 (i386-mingw-nt6.1.7600), installed cedet-1.0pre7.tar.gz. , installed ELPA, and company. You can find my simple Emacs configuration at the bottom. The problem is, when I type q[0] in main() and press . (dot), I see the 37 members of the vector, not Person although first_name and last_name are expected. The completion works as expected in the function greet() but it has nothing to do with vector. My question is, how can I accomplish code completion for vector elements too? #include <iostream> #include <vector> using namespace std; class Person { public: string first_name; string last_name; }; void greet(Person a_person) { // a_person.first_name is completed as expected! cout << a_person.first_name << "|"; cout << a_person.last_name << endl; }; int main() { vector<Person> q(2); Person guy1; guy1.first_name = "foo"; guy1.last_name = "bar"; Person guy2; guy2.first_name = "stack"; guy2.last_name = "overflow"; q[0] = guy1; q[1] = guy2; greet(guy1); greet(guy2); // cout q[0]. I want to see first_name or last_name here! } My Emacs configuration: ;;; This was installed by package-install.el. ;;; This provides support for the package system and ;;; interfacing with ELPA, the package archive. ;;; Move this code earlier if you want to reference ;;; packages in your .emacs. (when (load (expand-file-name "~/.emacs.d/elpa/package.el")) (package-initialize)) (load-file "~/.emacs.d/cedet/common/cedet.el") (semantic-load-enable-excessive-code-helpers) (require 'semantic-ia) (global-srecode-minor-mode 1) (semantic-add-system-include "/gcc/include/c++/4.4.2" 'c++-mode) (semantic-add-system-include "/gcc/i386-pc-mingw32/include" 'c++-mode) (semantic-add-system-include "/gcc/include" 'c++-mode) (defun my-semantic-hook () (imenu-add-to-menubar "TAGS")) (add-hook 'semantic-init-hooks 'my-semantic-hook)

    Read the article

  • Wordpress pages address rewrite

    - by kemp
    UPDATE I tried using the internal wordpress rewrite. What I have to do is an address like this: http://example.com/galleria/artist-name sent to the gallery.php page with a variable containing the artist-name. I used these rules as per Wordpress' documentation: // REWRITE RULES (per gallery) {{{ add_filter('rewrite_rules_array','wp_insertMyRewriteRules'); add_filter('query_vars','wp_insertMyRewriteQueryVars'); add_filter('init','flushRules'); // Remember to flush_rules() when adding rules function flushRules(){ global $wp_rewrite; $wp_rewrite->flush_rules(); } // Adding a new rule function wp_insertMyRewriteRules($rules) { $newrules = array(); $newrules['(galleria)/(.*)$'] = 'index.php?pagename=gallery&galleryname=$matches[2]'; return $newrules + $rules; } // Adding the id var so that WP recognizes it function wp_insertMyRewriteQueryVars($vars) { array_push($vars, 'galleryname'); return $vars; } what's weird now is that on my local wordpress test install, that works fine: the gallery page is called and the galleryname variable is passed. On the real site, on the other hand, the initial URL is accepted (as in it doesn't go into a 404) BUT it changes to http://example.com/gallery (I mean it actually changes in the browser's address bar) and the variable is not defined in gallery.php. Any idea what could possibly cause this different behavior? Alternatively, any other way I couldn't think of which could achieve the same effect described in the first three lines is perfectly fine. Old question What I need to do is rewriting this address: (1) http://localhost/wordpress/fake/text-value to (2) http://localhost/wordpress/gallery?somevar=text-value Notes: the remapping must be transparent: the user always has to see address (1) gallery is a permalink to a wordpress page, not a real address I basically need to rewrite the address first (to modify it) and then feed it back to mod rewrite again (to let wordpress parse it its own way). Problems if I simply do RewriteRule ^fake$ http://localhost/wordpress/gallery [L] it works but the address in the browser changes, which is no good, if I do RewriteRule ^fake$ /wordpress/gallery [L] I get a 404. I tried different flags instead of [L] but to no avail. How can I get this to work? EDIT: full .htaccess # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteRule ^fake$ /wordpress/gallery [R] RewriteBase /wordpress/ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /wordpress/index.php [L] </IfModule> # END WordPress

    Read the article

  • Helping linqtosql datacontext use implicit conversion between varchar column in the database and tab

    - by user213256
    I am creating an mssql database table, "Orders", that will contain a varchar(50) field, "Value" containing a string that represents a slightly complex data type, "OrderValue". I am using a linqtosql datacontext class, which automatically types the "Value" column as a string. I gave the "OrderValue" class implicit conversion operators to and from a string, so I can easily use implicit conversion with the linqtosql classes like this: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // use implicit converstion to turn the string representation of the order // value into the complex data type. OrderValue value = order.Value; // adjust one of the fields in the complex data type value.Shipping += 10; // use implicit conversion to store the string representation of the complex // data type back in the linqtosql order object order.Value = value; // save changes db.SubmitChanges(); However, I would really like to be able to tell the linqtosql class to type this field as "OrderValue" rather than as "string". Then I would be able to avoid complex code and re-write the above as: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // The Value field is already typed as the "OrderValue" type rather than as string. // When a string value was read from the database table, it was implicity converted // to "OrderValue" type. order.Value.Shipping += 10; // save changes db.SubmitChanges(); In order to achieve this desired goal, I looked at the datacontext designer and selected the "Value" field of the "Order" table. Then, in properties, I changed "Type" to "global::MyApplication.OrderValue". The "Server Data Type" property was left as "VarChar(50) NOT NULL" The project built without errors. However, when reading from the database table, I was presented with the following error message: Could not convert from type 'System.String' to type 'MyApplication.OrderValue'. at System.Data.Linq.DBConvert.ChangeType(Object value, Type type) at Read_Order(ObjectMaterializer1 ) at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader2.MoveNext() at System.Linq.Buffer1..ctor(IEnumerable1 source) at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source) at Example.OrdersProvider.GetOrders() at ... etc From the stack trace, I believe this error is happening while reading the data from the table. When presented with converting a string to my custom data type, even though the implicit conversion operators are present, the DBConvert class gets confused and throws an error. Is there anything I can do to help it not get confused and do the implicit conversion? Thanks in advance, and apologies if I have posted in the wrong forum. cheers / Ben
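
    For readers running into the same DBConvert error: LINQ to SQL's materializer does not appear to honour user-defined implicit operators when hydrating a column, so one common workaround is to leave the generated Value column typed as string and expose a typed wrapper in a partial class, letting the existing conversions run in your own code. A rough sketch only (the TypedValue name is made up; Order and OrderValue come from the question):

        // Partial class alongside the LINQ to SQL generated Order class.
        public partial class Order
        {
            public OrderValue TypedValue
            {
                get { return this.Value; }    // implicit string -> OrderValue
                set { this.Value = value; }   // implicit OrderValue -> string
            }
        }

        // Usage mirrors the original code:
        // OrderValue v = order.TypedValue;
        // v.Shipping += 10;
        // order.TypedValue = v;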

    Read the article

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi, Background: I have a set of java background workers I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04LTS server (a media temple (ve) instance). They're both running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250MB, while on the server they use more than 2.7GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus, kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine does not seem to ever use memory for buffer or cache (according to htop), which seems a bit strange, but perhaps normal for a server? (edited) Some info: (server) root@devel:/app/axir/target# uname -a Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux (local) wiktor@beastie:~$ uname -a Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux (edited) Comparing PS output: (ps -eo "ppid,pid,cmd,rss,sz,vsz") PPID PID CMD RSS SZ VSZ (local) 1588 1615 java -cp axir-distribution. 25484 234382 937528 1615 1631 java -cp /home/wiktor/Code/ 83472 163059 652236 1615 1657 java -cp /home/wiktor/Code/ 70624 89135 356540 1615 1658 java -cp /home/wiktor/Code/ 37652 77625 310500 1615 1669 java -cp /home/wiktor/Code/ 38096 77733 310932 1615 1675 java -cp /home/wiktor/Code/ 37420 61395 245580 1615 1684 java -cp /home/wiktor/Code/ 38000 77736 310944 1615 1703 java -cp /home/wiktor/Code/ 39180 78060 312240 1615 1712 java -cp /home/wiktor/Code/ 38488 93882 375528 1615 1719 java -cp /home/wiktor/Code/ 38312 77874 311496 1615 1726 java -cp /home/wiktor/Code/ 38656 77958 311832 1615 1727 java -cp /home/wiktor/Code/ 78016 89429 357716 (server) 22522 23560 java -cp axir-distribution. 24860 285196 1140784 23560 23585 java -cp /app/axir/target/a 100764 161629 646516 23560 23667 java -cp /app/axir/target/a 72408 92682 370728 23560 23670 java -cp /app/axir/target/a 39948 97671 390684 23560 23674 java -cp /app/axir/target/a 40140 81586 326344 23560 23739 java -cp /app/axir/target/a 39688 81542 326168 They look very similar. In fact, the question now is why, if I add up the virtual memory usage on the server (3.2GB) does it more closely reflect 2.4GB of memory used (according to free), yet locally the virtual memory used adds up to a much more substantial 4.7GB but only actually uses ~250MB. It seems that perhaps memory isn't being shared as aggressively. (if that's even possible) Thank you for your help, Wiktor

    Read the article

  • How to create filtered reports in WPF?

    - by Michael Goyote
    Creating reports in WPF. I have two related tables. Table A-Customer: CustomerID(PK) Names Phone Number Customer Num Table B-Items: Products Price CustomerID I want to be able to generate a report like this: CustomerA Items Price Item A 10 Item B 10 Item C 10 --------------- Total 30 So this is what I have done: <Window x:Class="ReportViewerWPF.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:rv="clr-namespace:Microsoft.Reporting.WinForms; assembly=Microsoft.ReportViewer.WinForms" Title="Customer Report" Height="300" Width="400"> <Grid> <WindowsFormsHost Name="windowsFormsHost1"> <rv:ReportViewer x:Name="reportViewer1"/> </WindowsFormsHost> </Grid> Then I created a dataset and loaded the two tables, followed by a report wizard (dragged all the available fields and dropped them to the Values pane). The code behind the WPF window is this: public partial class CustomerReport : Window { public CustomerReport() { InitializeComponent(); _reportViewer.Load += ReportViewer_Load; } private bool _isReportViewerLoaded; private void ReportViewer_Load(object sender, EventArgs e) { if (!_isReportViewerLoaded) { Microsoft.Reporting.WinForms.ReportDataSource reportDataSource1 = new Microsoft.Reporting.WinForms.ReportDataSource(); HM2DataSet dataset = new HM2DataSet(); dataset.BeginInit(); reportDataSource1.Name = "DataSet";//This is the dataset name reportDataSource1.Value = dataset.CustomerTable; this.reportViewer1.LocalReport.DataSources.Add(reportDataSource1); this.reportViewer1.LocalReport.ReportPath = "../../Report3.rdlc"; dataset.EndInit(); HM2DataSetTableAdapters.CustomerTableAdapter funcTableAdapter = new HM2DataSetTableAdapters.CustomerTableAdapter(); funcTableAdapter.ClearBeforeFill = true; funcTableAdapter.Fill(dataset.CustomerTable); _reportViewer.RefreshReport(); _isReportViewerLoaded = true; } } As you might have guessed this loaded this list of customer with items and price: Customer Items Price Customer A Items A 10 Customer A Items B 10 Customer B Items D 10 Customer B Items C 10 How can I fine-tune this report to look like the one above, where the user can filter the customer he wants displayed on the report? Thanks in advance for the help. I would have preferred to use LINQ whenever filtering data
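
    Since the question asks for per-customer filtering and mentions a preference for LINQ, one hedged approach is to filter the typed rows before handing them to the ReportDataSource, so the .rdlc itself stays simple. The table and column names below (ItemsTable, CustomerID) are assumptions based on "Table B" in the question; the customer id would come from whatever selector the window offers:

        // Requires a reference to System.Data.DataSetExtensions for LINQ over typed rows.
        int selectedCustomerId = 1;   // e.g. chosen in a ComboBox
        var rows = dataset.ItemsTable.Where(r => r.CustomerID == selectedCustomerId);

        reportDataSource1.Value = rows.Any()
            ? rows.CopyToDataTable()          // bind only this customer's items
            : dataset.ItemsTable.Clone();     // CopyToDataTable throws on an empty sequence

    An alternative with the same effect is to add a report parameter to the .rdlc and put a filter on the tablix, but filtering in code keeps the change entirely on the C# side.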

    Read the article

  • iPhone Development Profile Expired

    - by theiphoneguy
    I really combed this site and others. I read and re-read the related links here and the Apple docs. I'm sorry, but either I am obviously missing something right under my nose, or this Apple profile/certificate stuff is a bit convoluted. Here it is: I have a product in the App Store. I have updated it several times and users like it. My development profile recently expired just when I was improving the app for its next release. I can run the app in the simulator. I can compile and put the distribution build on my iPhone just fine. I went to the Apple portal and renewed the development profile. I downloaded it and installed it in Xcode. I see it in the Organize window. I see it on my iPhone. I CANNOT put the debug build on my iPhone to debug or run with Instruments. The message is that either there is not a valid signed profile or it is untrusted. I subsequently tried to download and install the certificate to my Mac's keychain. Still no success. I checked the code signing section of Project settings and also for the target and the root. All appears to indicate that it is using the expected development profile for debug. Yes, I had deleted the old profile from my iPhone, from the Organizer. I cleaned the Xcode cache and all targets. I have done all of this several times and in varying sequences to try to cover every possibility. I am ready to do anything to be able to debug with Instruments in order to check for leaks or high memory usage. Even though the distribution compile runs fine on my iPhone and plays well with other running processes, I will not release anything without a leaks/memory test. Any ideas will be appreciated. If I missed something obvious, please forgive me - it was not due to just posting a question without searching for similar postings. Thanks!

    Read the article

  • zlib gzgets extremely slow?

    - by monkeyking
    I'm doing stuff related to parsing huge globs of textfiles, and was testing what input method to use. There is not much of a difference using c++ std::ifstreams vs c FILE, According to the documentation of zlib, it supports uncompressed files, and will read the file without decompression. I'm seeing a difference from 12 seconds using non zlib to more than 4 minutes using zlib.h This I've tested doing multiple runs, so its not a disk cache issue. Am I using zlib in some wrong way? thanks #include <zlib.h> #include <cstdio> #include <cstdlib> #include <fstream> #define LENS 1000000 size_t fg(const char *fname){ fprintf(stderr,"\t-> using fgets\n"); FILE *fp =fopen(fname,"r"); size_t nLines =0; char *buffer = new char[LENS]; while(NULL!=fgets(buffer,LENS,fp)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } size_t is(const char *fname){ fprintf(stderr,"\t-> using ifstream\n"); std::ifstream is(fname,std::ios::in); size_t nLines =0; char *buffer = new char[LENS]; while(is. getline(buffer,LENS)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } size_t iz(const char *fname){ fprintf(stderr,"\t-> using zlib\n"); gzFile fp =gzopen(fname,"r"); size_t nLines =0; char *buffer = new char[LENS]; while(0!=gzgets(fp,buffer,LENS)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } int main(int argc,char**argv){ if(atoi(argv[2])==0) fg(argv[1]); if(atoi(argv[2])==1) is(argv[1]); if(atoi(argv[2])==2) iz(argv[1]); }

    Read the article

  • Asp.net Mvc - Kigg: Maintain User object in HttpContext.Items between requests.

    - by Pickels
    Hallo, first I want to say that I hope this doesn't look like I am lazy but I have some trouble understanding a piece of code from the following project. http://kigg.codeplex.com/ I was going through the source code and I noticed something that would be usefull for my own little project I am making. In their BaseController they have the following code: private static readonly Type CurrentUserKey = typeof(IUser); public IUser CurrentUser { get { if (!string.IsNullOrEmpty(CurrentUserName)) { IUser user = HttpContext.Items[CurrentUserKey] as IUser; if (user == null) { user = AccountRepository.FindByClaim(CurrentUserName); if (user != null) { HttpContext.Items[CurrentUserKey] = user; } } return user; } return null; } } This isn't an exact copy of the code I adjusted it a little to my needs. This part of the code I still understand. They store their IUser in HttpContext.Items. I guess they do it so that they don't have to call the database eachtime they need the User object. The part that I don't understand is how they maintain this object in between requests. If I understand correctly the HttpContext.Items is a per request cache storage. So after some more digging I found the following code. internal static IDictionary<UnityPerWebRequestLifetimeManager, object> GetInstances(HttpContextBase httpContext) { IDictionary<UnityPerWebRequestLifetimeManager, object> instances; if (httpContext.Items.Contains(Key)) { instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key]; } else { lock (httpContext.Items) { if (httpContext.Items.Contains(Key)) { instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key]; } else { instances = new Dictionary<UnityPerWebRequestLifetimeManager, object>(); httpContext.Items.Add(Key, instances); } } } return instances; } This is the part where some magic happens that I don't understand. I think they use Unity to do some dependency injection on each request? In my project I am using Ninject and I am wondering how I can get the same result. I guess InRequestScope in Ninject is the same as UnityPerWebRequestLifetimeManager? I am also wondering which class/method they are binding to which interface? Since the HttpContext.Items get destroyed each request how do they prevent losing their user object? Anyway it's kinda a long question so I am gradefull for any push in the right direction. Kind regards, Pickels
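
    On the InRequestScope question: yes, Ninject's InRequestScope plays roughly the same role as UnityPerWebRequestLifetimeManager — the kernel keeps one instance per HTTP request and discards it afterwards. A minimal sketch (the repository interface and FindByClaim call are modelled on the question, not taken from Kigg verbatim; depending on the Ninject version, InRequestScope comes from Ninject.Web.Common):

        // One repository instance per request:
        kernel.Bind<IAccountRepository>().To<AccountRepository>().InRequestScope();

        // The "current user" can be resolved once per request instead of being cached by hand:
        kernel.Bind<IUser>()
              .ToMethod(ctx => ctx.Kernel.Get<IAccountRepository>()
                                  .FindByClaim(HttpContext.Current.User.Identity.Name))
              .InRequestScope();

    With that in place, anything that takes an IUser in its constructor sees the same object for the whole request, which is what the HttpContext.Items trick in Kigg is simulating.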

    Read the article

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    we have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented. MSSQL and MySQL. Now, when we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow via slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example: CRecordset r(&m_db); r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref"); r.MoveFirst(); while(!r.IsEOF()) { // Retrieve CString strData; crs.GetFieldValue(L"a.something", strData); crs.MoveNext(); } Now, with the MySQL driver, everything runs as it should. The query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext(), the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using: ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER); But this didn't help either. There are still long-running exec's to sp_cursorfetch() et al in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time, too (even if there is only one record in the resultset). This however only happens on queries with LEFT JOINS, table-valued functions, etc. Note that the query doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time. Question1: Is is possible to somehow get the native client to "cache" all results locally use local cursors in a similar fashion as the MySQL driver seems to do it? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk the server again until the next query. We don't care about recordset updates, deletes, etc or any of that nonsense. We only want to retrieve data. We take that recordset, get all the data, and delete it. Question2: Is there a more efficient way to just retrieve data in MFC with ODBC?

    Read the article

  • How to populate GridView if Internet not available but images already cached to SD Card

    - by Sophie
    Hello I am writing an Application in which i am parsing JSON Images and then caching into SD Card. What I want to do ? I want to load images into GridView from JSON (by caching images into SD Card), and wanna populate GridView (no matter Internet available or not) once images already downloaded into SD Card. What I am getting ? I am able to cache images into SD Card, also to populate GridView, but not able to show images into GridView (if Internet not available) but images cached into SD Card @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { myGridView = inflater.inflate(R.layout.fragment_church_grid, container, false); if (isNetworkAvailable()) { new DownloadJSON().execute(); } else { Toast.makeText(getActivity(), "Internet not available !", Toast.LENGTH_LONG).show(); } return myGridView ; } private boolean isNetworkAvailable() { ConnectivityManager cm = (ConnectivityManager) getActivity().getSystemService(Context.CONNECTIVITY_SERVICE); NetworkInfo info = cm.getActiveNetworkInfo(); return (info != null); } // DownloadJSON AsyncTask private class DownloadJSON extends AsyncTask<Void, Void, Void> { @Override protected void onPreExecute() { super.onPreExecute(); // Create a progressdialog mProgressDialog = new ProgressDialog(getActivity()); // Set progressdialog title mProgressDialog.setTitle("Church Application"); // Set progressdialog message mProgressDialog.setMessage("Loading Images..."); mProgressDialog.setIndeterminate(false); // Show progressdialog mProgressDialog.show(); } @Override protected Void doInBackground(Void... params) { // Create an array arraylist = new ArrayList<HashMap<String, String>>(); // Retrieve JSON Objects from the given URL address jsonobject = JSONfunctions .getJSONfromURL("http://snapoodle.com/APIS/android/feed.php"); try { // Locate the array name in JSON jsonarray = jsonobject.getJSONArray("print"); for (int i = 0; i < jsonarray.length(); i++) { HashMap<String, String> map = new HashMap<String, String>(); jsonobject = jsonarray.getJSONObject(i); // Retrive JSON Objects map.put("saved_location", jsonobject.getString("saved_location")); // Set the JSON Objects into the array arraylist.add(map); } } catch (JSONException e) { Log.e("Error", e.getMessage()); e.printStackTrace(); } return null; } @Override protected void onPostExecute(Void args) { // Locate the listview in listview_main.xml listview = (GridView) shriRamView.findViewById(R.id.listview); // Pass the results into ListViewAdapter.java adapter = new ChurchImagesAdapter(getActivity(), arraylist); // Set the adapter to the ListView listview.setAdapter(adapter); // Close the progressdialog mProgressDialog.dismiss(); } } }

    Read the article

  • How to create a simple Proxy to access web servers in C

    - by jesusiniesta
    Hi. I’m trying to create an small Web Proxy in C. First, I’m trying to get a webpage, sending a GET frame to the server. I don’t know what I have missed, but I am not receiving any response. I would really appreciate if you can help me to find what is missing in this code. int main (int argc, char** argv) { int cache_size, //size of the cache in KiB port, port_google = 80, dir, mySocket, socket_google; char google[] = "www.google.es", ip[16]; struct sockaddr_in socketAddr; char buffer[10000000]; if (GetParameters(argc,argv,&cache_size,&port) != 0) return -1; GetIP (google, ip); printf("ip2 = %s\n",ip); dir = inet_addr (ip); printf("ip3 = %i\n",dir); /* Creation of a socket with Google */ socket_google = conectClient (port_google, dir, &socketAddr); if (socket_google < 0) return -1; else printf("Socket created\n"); sprintf(buffer,"GET /index.html HTTP/1.1\r\n\r\n"); if (write(socket_google, (void*)buffer, LONGITUD_MSJ+1) < 0 ) return 1; else printf("GET frame sent\n"); strcpy(buffer,"\n"); read(socket_google, buffer, sizeof(buffer)); // strcpy(message,buffer); printf("%s\n", buffer); return 0; } And this is the code I use to create the socket. I think this part is OK, but I copy it just in case. int conectClient (int puerto, int direccion, struct sockaddr_in *socketAddr) { int mySocket; char error[1000]; if ( (mySocket = socket(AF_INET, SOCK_STREAM, 0)) == -1) { printf("Error when creating the socket\n"); return -2; } socketAddr->sin_family = AF_INET; socketAddr->sin_addr.s_addr = direccion; socketAddr->sin_port = htons(puerto); if (connect (mySocket, (struct sockaddr *)socketAddr,sizeof (*socketAddr)) == -1) { snprintf(error, sizeof(error), "Error in %s:%d\n", __FILE__, __LINE__); perror(error); printf("%s\n",error); printf ("-- Error when stablishing a connection\n"); return -1; } return mySocket; } Thanks!

    Read the article

  • C#/.NET Project - Am I setting things up correctly?

    - by JustLooking
    1st solution located: \Common\Controls\Controls.sln and its project: \Common\Controls\Common.Controls\Common.Controls.csproj Description: This is a library that contains this class: public abstract class OurUserControl : UserControl { // Variables and other getters/setters common to our UserControls } 2nd solution located: \AControl\AControl.sln and its project: \AControl\AControl\AControl.csproj Description: Of the many forms/classes, it will contain this class: using Common.Controls; namespace AControl { public partial class AControl : OurUserControl { // The implementation } } A note about adding references (not sure if this is relevant): When I add references (for projects I create), using the names above: 1. I add Common.Controls.csproj to AControl.sln 2. In AControl.sln I turn off the build of Common.Controls.csproj 3. I add the reference to Common.Controls (by project) to AControl.csproj. This is the (easiest) way I know how to get Debug versions to match Debug References, and Release versions to match Release References. Now, here is where the issue lies (the 3rd solution/project that actually utilizes the UserControl): 3rd solution located: \MainProj\MainProj.sln and its project: \MainProj\MainProj\MainProj.csproj Description: Here's a sample function in one of the classes: private void TestMethod<T>() where T : Common.Controls.OurUserControl, new() { T TheObject = new T(); TheObject.OneOfTheSetters = something; TheObject.AnotherOfTheSetters = something_else; // Do stuff with the object } We might call this function like so: private void AnotherMethod() { TestMethod<AControl.AControl>(); } This builds, runs, and works. No problem. The odd thing is after I close the project/solution and re-open it, I have red squigglies everywhere. I bring up my error list and I see tons of errors (anything that deals with AControl will be noted as an error). I'll see errors such as: The type 'AControl.AControl' cannot be used as type parameter 'T' in the generic type or method 'MainProj.MainClass.TestMethod()'. There is no implicit reference conversion from 'AControl.AControl' to 'Common.Controls.OurUserControl'. or inside the actual method (the properties located in the abstract class): 'AControl.AControl' does not contain a definition for 'OneOfTheSetters' and no extension method 'OneOfTheSetters' accepting a first argument of type 'AControl.AControl' could be found (are you missing a using directive or an assembly reference?) Meanwhile, I can still build and run the project (then the red squigglies go away until I re-open the project, or close/re-open the file). It seems to me that I might be setting up the projects incorrectly. Thoughts?

    Read the article

  • One Controller is Sometimes Bound Twice with Ninject

    - by Dusda
    I have the following NinjectModule, where we bind our repositories and business objects: /// <summary> /// Used by Ninject to bind interface contracts to concrete types. /// </summary> public class ServiceModule : NinjectModule { /// <summary> /// Loads this instance. /// </summary> public override void Load() { //bindings here. //Bind<IMyInterface>().To<MyImplementation>(); Bind<IUserRepository>().To<SqlUserRepository>(); Bind<IHomeRepository>().To<SqlHomeRepository>(); Bind<IPhotoRepository>().To<SqlPhotoRepository>(); //and so on //business objects Bind<IUser>().To<Data.User>(); Bind<IHome>().To<Data.Home>(); Bind<IPhoto>().To<Data.Photo>(); //and so on } } And here are the relevant overrides from our Global.asax, where we inherit from NinjectHttpApplication in order to integrate it with Asp.Net Mvc (The module lies in a separate dll called Thing.Web.Configuration): protected override void OnApplicationStarted() { base.OnApplicationStarted(); //routes and areas AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); //Initializes a singleton that must reference this HttpApplication class, //in order to provide the Ninject Kernel to the rest of Thing.Web. This //is necessary because there are a few instances (currently Membership) //that require manual dependency injection. NinjectKernel.Instance = new NinjectKernel(this); //view model factory. NinjectKernel.Instance.Kernel.Bind<IModelFactory>().To<MasterModelFactory>(); } protected override NinjectControllerFactory CreateControllerFactory() { return base.CreateControllerFactory(); } protected override Ninject.IKernel CreateKernel() { var kernel = new StandardKernel(); kernel.Load("Thing.Web.Configuration.dll"); return kernel; } Now, everything works great, with one exception: For some reason, sometimes Ninject will bind the PhotoController twice. This leads to an ActivationException, because Ninject can't discern which PhotoController I want. This causes all requests for thumbnails and other user images on the site to fail. Here is the PhotoController in it's entirety: public class PhotoController : Controller { public PhotoController() { } public ActionResult Index(string id) { var dir = Server.MapPath("~/" + ConfigurationManager.AppSettings["UserPhotos"]); var path = Path.Combine(dir, id); return base.File(path, "image/jpeg"); } } Every controller works in exactly the same way, but for some reason the PhotoController gets double-bound. Even then, it only happens occasionally (either when re-building the solution, or on staging/production when the app pool kicks in). Once this happens, it continues to happen until I redeploy without changing anything. So...what's up with that?

    Read the article

  • CSS selectors : should I minimise my use of the class attribute in the HTML or optimise the speed

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check whether there were improvements I could make so the site loads faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand PageSpeed correctly, it's much more efficient for the browser to match directly against a class name. That makes sense to me, but it also means I need to put more CSS classes in my HTML, and it makes my .css file a little harder to read. I usually tend to write my CSS like this: #mainContent p.productDescription em.priceTag { ... } which makes it easy to read: I know this affects the main content, that it affects something in a paragraph tag (so I won't start putting all sorts of layout code in it) describing a product, and that it's something that needs emphasis. However, it seems I should rewrite it as .priceTag { ... } which removes all context information from the style. And if I want differently formatted price tags (for example, one in a list in the sidebar and one in a paragraph), I need something like .paragraphPriceTag { ... } .listPriceTag { ... } which really annoys me, since I seem to be duplicating the semantics of the HTML in my classes. It also means I can't put common styles in an unqualified .priceTag { ... } and thus have to replicate the style in both CSS rules, making changes harder. (Although for that I could use multiple class selectors, but IE6 doesn't support them.) Making code harder to read for the sake of speed has never really been considered good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering: should I stick with few CSS classes and complex selectors, or should I go the optimisation route and drop my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?

    Read the article

  • Memory Leak with jQuery UI inside UpdatePanel in ie7

    - by Ryan
    I have a fairly complex asp.net page based on UpdatePanels and jQuery UI. Unfortunately, when the panels update, the jQuery UI widgets leak memory like crazy in ie7, even if I manually 'destroy' them. Does anyone know a technique/patch to prevent these leaks? I've created a simple example page with a slider inside an UpdatePanel. Just click the 'Leak' button and refresh the page to see the leak in sieve. <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Leak2.aspx.cs" Inherits="Leak2" %> <%@ Register Assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Namespace="System.Web.UI" TagPrefix="asp" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Leak</title> <link type="text/css" href="/jquery/css/custom-theme/jquery-ui-1.7.2.custom.css" rel="Stylesheet" /> <script type="text/javascript" src="/jquery/js/jquery-1.3.2.min.js"></script> <script type="text/javascript" src="/jquery/js/jquery-ui-1.7.2.custom.min.js"></script> </head> <body> <form id="form1" runat="server"> <div> <script type="text/javascript"> function initializeSlider() { $(".slider").slider({ min: 0, max: 100, value: 100, step: 5 }); } $(document).ready(function() { initializeSlider(); Sys.WebForms.PageRequestManager.getInstance().add_endRequest(function() { initializeSlider(); }); }); </script> <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager> <asp:UpdatePanel ID="UpdatePanel1" runat="server"> <ContentTemplate> <div style="width: 300px;"> <div class="slider"></div> <asp:Button ID="Button1" runat="server" OnClick="Button1_Click" Text="Leak" /> </div> </ContentTemplate> </asp:UpdatePanel> </div> </form> </body> </html>

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in out of memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out of memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time, there seems to be little performance difference between say 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from FileStream)? Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out of memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs MemoryStream read/writes behave in terms of disk access and memory caching, i.e. data size/performance balance. I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.
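
    For what it's worth, FileStream already buffers in user space and the OS file cache sits behind it, so small sequential writes rarely translate into one disk access each; the hybrid is mostly about avoiding one huge contiguous in-memory allocation. If the threshold idea still seems attractive, it can be wrapped so the calling code never has to care which backing store is in use — a sketch only, with an arbitrary threshold and a temp file for the spill:

        using System;
        using System.IO;

        public sealed class SpillOverStream : IDisposable
        {
            private const long Threshold = 500L * 1024 * 1024;   // switch point, tune as needed
            private MemoryStream memory = new MemoryStream();
            private FileStream file;                              // created only if we spill

            public void Write(byte[] buffer, int offset, int count)
            {
                if (file == null && memory.Length + count > Threshold)
                {
                    // Spill everything buffered so far to disk and keep appending there.
                    file = new FileStream(Path.GetTempFileName(), FileMode.Create,
                                          FileAccess.ReadWrite, FileShare.None, 64 * 1024);
                    memory.Position = 0;
                    memory.CopyTo(file);        // .NET 4's Stream.CopyTo
                    memory.Dispose();
                    memory = null;
                }

                if (file != null) file.Write(buffer, offset, count);
                else memory.Write(buffer, offset, count);
            }

            public Stream OpenForReading()
            {
                Stream s = (Stream)file ?? memory;
                s.Position = 0;
                return s;
            }

            public void Dispose()
            {
                if (memory != null) memory.Dispose();
                if (file != null) file.Dispose();
            }
        }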

    Read the article

  • Do you use an exception class in your Perl programs? Why or why not?

    - by daotoad
    I've got a bunch of questions about how people use exceptions in Perl. I've included some background notes on exceptions, skip this if you want, but please take a moment to read the questions and respond to them. Thanks. Background on Perl Exceptions Perl has a very basic built-in exception system that provides a spring-board for more sophisticated usage. For example die "I ate a bug.\n"; throws an exception with a string assigned to $@. You can also throw an object, instead of a string: die BadBug->new('I ate a bug.'); You can even install a signal handler to catch the SIGDIE psuedo-signal. Here's a handler that rethrows exceptions as objects if they aren't already. $SIG{__DIE__} = sub { my $e = shift; $e = ExceptionObject->new( $e ) unless blessed $e; die $e; } This pattern is used in a number of CPAN modules. but perlvar says: Due to an implementation glitch, the $SIG{DIE} hook is called even inside an eval(). Do not use this to rewrite a pending exception in $@ , or as a bizarre substitute for overriding CORE::GLOBAL::die() . This strange action at a distance may be fixed in a future release so that $SIG{DIE} is only called if your program is about to exit, as was the original intent. Any other use is deprecated. So now I wonder if objectifying exceptions in sigdie is evil. The Questions Do you use exception objects? If so, which one and why? If not, why not? If you don't use exception objects, what would entice you to use them? If you do use exception objects, what do you hate about them, and what could be better? Is objectifying exceptions in the DIE handler a bad idea? Where should I objectify my exceptions? In my eval{} wrapper? In a sigdie handler? Are there any papers, articles or other resources on exceptions in general and in Perl that you find useful or enlightening.

    Read the article

  • Downloading large file with php

    - by Alessandro
    Hi, I have to write a php script to download potentially large files. The file I'm reporting here works fine most of the times. However, if the client's connection is slow the request ends (with status code 200) in the middle of the downloading, but not always at the very same point, and not at the very same time. I tried to overwrite some php.ini variables (see the first statements) but the problem remains. I don't know if it's relevant but my hosting server is SiteGround, and for simple static file requests, the download works fine also with slow connections. I've found Forced downloading large file with php but I didn't understand mario's answer. I'm new to web programming. So here's my code. <?php ini_set('memory_limit','16M'); ini_set('post_max_size', '30M'); set_time_limit(0); include ('../private/database_connection.php'); $downloadFolder = '../download/'; $fileName = $_POST['file']; $filePath = $downloadFolder . $fileName; if($fileName == NULL) { exit; } ob_start(); session_start(); if(!isset($_SESSION['Username'])) { // or redirect to login (remembering this download request) $_SESSION['previousPage'] = 'download.php?file=' . $fileName; header("Location: login.php"); exit; } if (file_exists($filePath)) { header('Content-Description: File Transfer'); header('Content-Type: application/octet-stream'); //header('Content-Disposition: attachment; filename='.$fileName); header("Content-Disposition: attachment; filename=\"$fileName\""); header('Content-Transfer-Encoding: binary'); header('Expires: 0'); header('Cache-Control: must-revalidate, post-check=0, pre-check=0'); //header('Pragma: public'); header('Content-Length: ' . filesize($filePath)); ob_clean(); flush(); // download // 1 // readfile($filePath); // 2 $file = @fopen($filePath,"rb"); if ($file) { while(!feof($file)) { print(fread($file, 1024*8)); flush(); if (connection_status()!=0) { @fclose($file); die(); } } @fclose($file); } exit; } else { header('HTTP/1.1 404 File not found'); exit; } ?>

    Read the article

  • PHP Array problems....if anyone can assist!

    - by Homer_J
    First off, the code which brings back my data into an array: function fetch_questions($page) { global $link; $proc = mysqli_prepare($link, "SELECT * FROM tques WHERE page = $page"); mysqli_stmt_bind_param($proc, "i", $page); mysqli_stmt_execute($proc); $rowq = array(); stmt_bind_assoc($proc, $rowq); // loop through all result rows while ($proc->fetch()) { // print_r($rowq); } mysqli_stmt_close($proc); mysqli_clean_connection($link); return($rowq); } Now, when I `print_r($rowq);' I get the following, which is all good: Array ( [questions] => q1 [qnum] => 1 [qtext] => I find my job meaningful [page] => 1 ) Array ( [questions] => q2 [qnum] => 2 [qtext] => I find my job interesting [page] => 1 ) Array ( [questions] => q3 [qnum] => 3 [qtext] => My work supports ABC's objective [page] => 1 ) Array ( [questions] => q4 [qnum] => 4 [qtext] => I am able to balance my work and home life [page] => 1 ) Array ( [questions] => q5 [qnum] => 5 [qtext] => I am clear about what is expected of me in my job [page] => 1 ) Array ( [questions] => q6 [qnum] => 6 [qtext] => My induction helped me to settle into my job [page] => 1 ) Array ( [questions] => q7 [qnum] => 7 [qtext] => I understand the ABC vision [page] => 1 ) Array ( [questions] => q8 [qnum] => 8 [qtext] => I know how what I do fits into my team's objectives [page] => 1 ) Now, in my php page I have the following piece of script: $questions = fetch_questions($page); And when I print_r $questions, as below: print_r($questions); I only get the following back from the array, 1 row: Array ( [questions] => q8 [qnum] => 8 [qtext] => I know how what I do fits into my team's objectives [page] => 1 ) Any ideas why that might be? Thanks in advance, Homer.

    Read the article

  • Specializing a template on a lambda in C++0x

    - by Tony A.
    I've written a traits class that lets me extract information about the arguments and type of a function or function object in C++0x (tested with gcc 4.5.0). The general case handles function objects: template <typename F> struct function_traits { template <typename R, typename... A> struct _internal { }; template <typename R, typename... A> struct _internal<R (F::*)(A...)> { // ... }; typedef typename _internal<decltype(&F::operator())>::<<nested types go here>>; }; Then I have a specialization for plain functions at global scope: template <typename R, typename... A> struct function_traits<R (*)(A...)> { // ... }; This works fine, I can pass a function into the template or a function object and it works properly: template <typename F> void foo(F f) { typename function_traits<F>::whatever ...; } int f(int x) { ... } foo(f); What if, instead of passing a function or function object into foo, I want to pass a lambda expression? foo([](int x) { ... }); The problem here is that neither specialization of function_traits<> applies. The C++0x draft says that the type of the expression is a "unique, unnamed, non-union class type". Demangling the result of calling typeid(...).name() on the expression gives me what appears to be gcc's internal naming convention for the lambda, main::{lambda(int)#1}, not something that syntactically represents a C++ typename. In short, is there anything I can put into the template here: template <typename R, typename... A> struct function_traits<????> { ... } that will allow this traits class to accept a lambda expression?

    Read the article

  • codeIgniter: pass parameter to a select query from previous query

    - by krike
    I'm creating a little management tool for the browser game travian. So I select all the villages from the database and I want to display some content that's unique to each of the villages. But in order to query for those unique details I need to pass the id of the village. How should I do this? this is my code (controller): function members_area() { global $site_title; $this->load->model('membership_model'); if($this->membership_model->get_villages()) { $data['rows'] = $this->membership_model->get_villages(); $id = 1;//this should be dynamic, but how? if($this->membership_model->get_tasks($id)): $data['tasks'] = $this->membership_model->get_tasks($id); endif; } $data['title'] = $site_title." | Your account"; $data['main_content'] = 'account'; $this->load->view('template', $data); } and this is the 2 functions I'm using in the model: function get_villages() { $q = $this->db->get('villages'); if($q->num_rows() > 0) { foreach ($q->result() as $row) { $data[] = $row; } return $data; } } function get_tasks($id) { $this->db->select('name'); $this->db->from('tasks'); $this->db->where('villageid', $id); $q = $this->db->get(); if($q->num_rows() > 0) { foreach ($q->result() as $task) { $data[] = $task; } return $data; } } and of course the view: <?php foreach($rows as $r) : ?> <div class="village"> <h3><?php echo $r->name; ?></h3> <ul> <?php foreach($tasks as $task): ?> <li><?php echo $task->name; ?></li> <?php endforeach; ?> </ul> <?php echo anchor('site/add_village/'.$r->id.'', '+ add new task'); ?> </div> <?php endforeach; ?> ps: please do not remove the comment in the first block of code!

    Read the article

  • Refactoring Singleton Overuse

    - by drharris
    Today I had an epiphany, and it was that I was doing everything wrong. Some history: I inherited a C# application, which was really just a collection of static methods, a completely procedural mess of C# code. I refactored this the best I knew at the time, bringing in lots of post-college OOP knowledge. To make a long story short, many of the entities in code have turned out to be Singletons. Today I realized I needed 3 new classes, which would each follow the same Singleton pattern to match the rest of the software. If I keep tumbling down this slippery slope, eventually every class in my application will be Singleton, which will really be no logically different from the original group of static methods. I need help on rethinking this. I know about Dependency Injection, and that would generally be the strategy to use in breaking the Singleton curse. However, I have a few specific questions related to this refactoring, and all about best practices for doing so. How acceptable is the use of static variables to encapsulate configuration information? I have a brain block on using static, and I think it is due to an early OO class in college where the professor said static was bad. But, should I have to reconfigure the class every time I access it? When accessing hardware, is it ok to leave a static pointer to the addresses and variables needed, or should I continually perform Open() and Close() operations? Right now I have a single method acting as the controller. Specifically, I continually poll several external instruments (via hardware drivers) for data. Should this type of controller be the way to go, or should I spawn separate threads for each instrument at the program's startup? If the latter, how do I make this object oriented? Should I create classes called InstrumentAListener and InstrumentBListener? Or is there some standard way to approach this? Is there a better way to do global configuration? Right now I simply have Configuration.Instance.Foo sprinkled liberally throughout the code. Almost every class uses it, so perhaps keeping it as a Singleton makes sense. Any thoughts? A lot of my classes are things like SerialPortWriter or DataFileWriter, which must sit around waiting for this data to stream in. Since they are active the entire time, how should I arrange these in order to listen for the events generated when data comes in? Any other resources, books, or comments about how to get away from Singletons and other pattern overuse would be helpful.
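
    On the global-configuration question, one incremental step away from Configuration.Instance.Foo that doesn't require adopting a container first is to hand each worker an immutable settings object through its constructor. The names below are invented for illustration; only SerialPortWriter is taken from the question:

        // A plain snapshot of whatever the workers need; no global lookup involved.
        public sealed class AppSettings
        {
            public string DataDirectory { get; private set; }
            public int PollIntervalMs { get; private set; }

            public AppSettings(string dataDirectory, int pollIntervalMs)
            {
                DataDirectory = dataDirectory;
                PollIntervalMs = pollIntervalMs;
            }
        }

        public class SerialPortWriter
        {
            private readonly AppSettings settings;

            // The dependency is explicit, so the class can be unit tested without
            // touching the real Configuration singleton.
            public SerialPortWriter(AppSettings settings)
            {
                this.settings = settings;
            }
        }

    The composition root (Main, or the web app's startup) builds one AppSettings and passes it down; whether that wiring is done by hand or by a DI container is then a separate decision.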

    Read the article

  • Moving Function With Arguments To RequireJS

    - by Jazimov
    I'm not only relatively new to JavaScript but also to RequireJS (coming from string C# background). Currently on my web page I have a number of JavaScript functions. Each one takes two arguments. Imagine that they look like this: functionA(x1, y1) { ... } functionB(x2, y2) { ... } functionC(x3, y3) { ... } Currently, these functions exist in a tag on my HTML page and I simply call each as needed. My functions have dependencies on KnockoutJS, jQuery, and some other JS libraries. I currently have Script tags that synchronously load those external .js dependencies. But I want to use RequireJS so that they're loaded asynchronously, as needed. To do this, I plan to move all three functions above into an external .js file (a type of AMD "module") called MyFunctions.js. That file will have a define() call (to RequireJS's define function) that will look something like this: define(["knockout", "jquery", ...], function("ko","jquery", ...) {???} ); My question is how to "wrap" my functionA, functionB, and functionC functions where the ??? is above so that I can use those functions on my page as needed. For example, in the onclick event handler for a button on my HTML page, I would want to call functionA and pass two it two arguments; same for functionB and functionC. I don't fully understand how to expose those functions when they're wrapped in a define that itself is located in an external .js file. I know that define assures that my listed libraries are loaded asynchronously before the callback function is called, but after that's done I don't understand how the web page's script tags would use my functions. Would I need to use require to ensure they're available, such as: require(["myfunctions"],function({not sure what to put here})] I think I understand the basics of RequireJS but I don't understand how to wrap my functions so that they're in external .js files, don't pollute the global namespace, and yet can still be called from the main page so that arguments can be passed to them. I imagine they're are many ways to do this but in reviewing the RequireJS docs and some videos out there, I can't say I understand how... Thank you for any help.

    Read the article

  • [NHibernate and ASP.NET MVC] How can I implement a robust session-per-request pattern in my project,

    - by Guillaume Gervais
    I'm currently building an ASP.NET MVC project with NHibernate as its persistence layer. For now, some functionality has been implemented, but it only uses local NHibernate sessions: each method that accesses the database (read or write) instantiates its own NHibernate session inside a "using()" block. The problem is that I want to leverage NHibernate's lazy-loading capabilities to improve the performance of my project. This implies one open NHibernate session per request, kept open until the view is rendered. Furthermore, simultaneous requests must be supported (multiple sessions at the same time). How can I achieve that as cleanly as possible? I searched the Web a little and learned about the session-per-request pattern. Most of the implementations I saw used some sort of Http* object (HttpContext, etc.) to store the session. Also, using the Application_BeginRequest/Application_EndRequest functions is complicated, since they fire for every HTTP request (aspx files, css files, js files, etc.), when I only want to instantiate a session once per request. My concern is that I don't want my views or controllers to have access to NHibernate sessions (or, more generally, NHibernate namespaces and code). That means I do not want to handle sessions at the controller level nor at the view level. I have a few options in mind. Which one seems best?
    - Use interceptors (like in Grails) that get triggered before and after the controller action. These would open and close sessions/transactions. Is that possible in the ASP.NET MVC world?
    - Use the CurrentSessionContext singleton provided by NHibernate in a web context. Using this page as an example, I think this is quite promising, but it still requires filters at the controller level.
    - Use HttpContext.Current.Items to store the request session. This, coupled with a few lines of code in Global.asax.cs, can easily provide a session at the request level. However, it means a dependency between NHibernate and my views (HttpContext).
    Thank you very much!
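
    Since the question leans towards NHibernate's own contextual-session support, here is a rough sketch of that second option, kept entirely inside Global.asax.cs and the repositories so controllers and views never see an ISession. It assumes current_session_context_class is set to "web" in the NHibernate configuration and that SessionFactory is the application-wide ISessionFactory:

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // One session per request; lazy loading stays available until EndRequest.
            var session = SessionFactory.OpenSession();
            NHibernate.Context.CurrentSessionContext.Bind(session);
        }

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            var session = NHibernate.Context.CurrentSessionContext.Unbind(SessionFactory);
            if (session != null)
            {
                session.Dispose();
            }
        }

        // Data access code then asks for the bound session and never opens its own:
        // var session = SessionFactory.GetCurrentSession();

    Static files do still pass through BeginRequest under the IIS7 integrated pipeline, so if that overhead matters, BeginRequest can skip the binding for requests that clearly aren't MVC routes, or the session can be opened lazily the first time it is actually requested.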

    Read the article
