Search Results

Search found 17538 results on 702 pages for 'request headers'.

  • Nginx Reverse Proxy: post_action if proxy cache hit - possible?

    - by anonymous-one
    We recently found out about nginx's post_action. We were wondering if there is a way to use this directive only when a proxy cache hit is made. The flow we are hoping for is as follows:

    1) User request comes in
    2) On a cache HIT go to A; on a cache MISS go to B

    A1) Serve the cached result
    A2) post_action to another URL on the backend

    B1) Serve the request from the backend
    B2) Store the result from the backend

    Any ideas if this is possible via post_action? Thanks!
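
    A minimal sketch of how this might be wired up, assuming a cache zone named my_cache and a hypothetical /notify endpoint on the backend; nginx exposes the cache outcome in $upstream_cache_status, which a post_action location can forward:

        location / {
            proxy_cache  my_cache;
            proxy_pass   http://backend;
            post_action  @notify;    # fires after the main request completes
        }

        location @notify {
            # $upstream_cache_status is HIT, MISS, EXPIRED, etc. at this point
            proxy_pass http://backend/notify?cache=$upstream_cache_status;
        }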

  • Facebook, Twitter, Yahoo don't work. CDN problem. Akamai?

    - by Toktik
    Some sites don't work normally: they open without CSS and images, and with JavaScript errors... Facebook gets stuck on static.ak.fbcdn.net, Twitter on a1.twimg.com, Yahoo on l.yimg.com. In Firefox I'm left at "Waiting for ..." (any of those hosts). I can access Facebook only over SSL, like https://facebook.com. When I ping those hosts I only get "Request timed out". Update: when I ping static.ak.fbcdn.net it resolves to a749.g.akamai.net, and when I ping that server I also get "Request timed out".

  • List Squid's internal ip:port to external ip:port mapping table

    - by joshperry
    I'm assuming that squid keeps a mapping from the internal ip:port a request is made on to the external ip:port the request is fulfilled with. In the case of a long transfer, such as a file download, it would be nice to be able to see which internal ip:port is downloading the file. I can easily see the traffic and get the external ip:port that squid is using with tcpdump or iptraf, but I can't find a way to map this back to an internal ip:port.
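
    A hedged starting point, assuming squid's cache manager interface is enabled: its active_requests and filedescriptors pages list, respectively, each in-flight request with its client-side (internal) connection, and every open socket including the outbound (external) side, which can then be matched against what tcpdump shows:

        # in-flight requests with their client connections and URIs
        squidclient mgr:active_requests

        # all open file descriptors, including outbound sockets
        squidclient mgr:filedescriptors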

  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it will make multiple HTTP connections to the server in serial, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds).

    What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (because it's not the same as the total number of concurrently connected HTTP clients). The method I've come up with is to track HTTP requests by the originating IP address, and store the IP address somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, then we check the total number of unexpired stored IP addresses; if that's less than the maximum, then we allow this request through. And if a request comes in with an IP address that we can find in the look-up table and that hasn't yet expired, then it is allowed through as well. All requests that are allowed through have their IPs added to the table (if not there before) and the TTL refreshed to 20 seconds again.

    I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx lua module to simplify the conditional branching), using redis to store my IP addresses with a TTL (the SETEX command) and checking the table size with the DBSIZE command. This worked, but the performance was horrible: nginx and redis ended up using lots of CPU and the machine could only handle a very small number of concurrent requests.

    The new stick-table and tracking counters that were added to Haproxy in version 1.5 (via a commission from serverfault) seem like they might be ideal to implement exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick table, which would be necessary to know the number of connected web clients.

    I'm curious if anyone has any suggestions for nginx or haproxy, or even for something else not mentioned here that I haven't thought of yet.
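
    For reference, a sketch of the redis logic described above in Python (redis-py), with the 20-second TTL and a hypothetical client cap; SETEX registers an IP with a TTL and DBSIZE counts the unexpired entries, exactly as described:

        import redis

        TTL = 20            # seconds an IP keeps counting as a connected client
        MAX_CLIENTS = 100   # hypothetical concurrency cap

        r = redis.Redis()

        def allow_request(ip: str) -> bool:
            if r.exists(ip):              # known, unexpired client
                r.expire(ip, TTL)         # refresh the TTL on every request
                return True
            if r.dbsize() < MAX_CLIENTS:  # every key is one unexpired client IP
                r.setex(ip, TTL, 1)       # register the new client for 20 seconds
                return True
            return False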

  • 40k Event Log Errors an Hour: Unknown Username or Bad Password

    - by ErocM
    I am getting about 200k of these an hour:

        An account failed to log on.

        Subject:
            Security ID:        SYSTEM
            Account Name:       TGSERVER$
            Account Domain:     WORKGROUP
            Logon ID:           0x3e7

        Logon Type:             4

        Account For Which Logon Failed:
            Security ID:        NULL SID
            Account Name:       administrator
            Account Domain:     TGSERVER

        Failure Information:
            Failure Reason:     Unknown user name or bad password.
            Status:             0xc000006d
            Sub Status:         0xc0000064

        Process Information:
            Caller Process ID:   0x334
            Caller Process Name: C:\Windows\System32\svchost.exe

        Network Information:
            Workstation Name:        TGSERVER
            Source Network Address:  -
            Source Port:             -

        Detailed Authentication Information:
            Logon Process:              Advapi
            Authentication Package:     Negotiate
            Transited Services:         -
            Package Name (NTLM only):   -
            Key Length:                 0

        This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols. - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.

    On my server, I changed my administrative username to something else, and since then I've been inundated with these messages. I found on http://technet.microsoft.com/en-us/library/cc787567(v=WS.10).aspx that logon type 4 means "Batch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention," which really doesn't shed any light on it for me. I checked the services and they are all logging on as Local System or Network Service - nothing as administrator. Anyone have any idea how I can tell where these are coming from? I would assume this is a program that is crapping out... Thanks in advance!

  • Is it possible to configure TMG to impersonate a domain user for anonymous requests to a website?

    - by Daniel Root
    I would like to configure Forefront Threat Management Gateway (formerly ISA server) to impersonate a specific domain user for any anonymous request to a particular listener. For example, for any anonymous request to http://www.mycompany.com, I would like to serve up http://myinternal as though MYDOMAIN/GuestAccount were accessing the site. Is this even possible in ISA/TMG? If so, where do I go to configure this?

  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the output bandwidth generated by an application with Linux tc. The application sends me the source port of each request, which I use as a filter to limit each user to a given download speed. I feel that my setup could be managed much better if I had deeper knowledge of tc.

    At the application level, users are categorized as members of a group, and each group has a limited bandwidth. Example:

    Members of group A: 512kbit/s
    Members of group B: 1Mbit/s
    Members of group C: 2Mbit/s

    When a user connects to the application, it retrieves the source port of the user's request and sends me that port together with the bandwidth the user must be limited to, depending on the group he belongs to. With this information I must add the appropriate rules so that the user (the source port, in reality) is limited to the right bandwidth. If the connecting user isn't a member of any group, he should be limited to a default bandwidth.

    I'm currently managing this with a self-made daemon that adds or removes rules when it receives a request from the application. With my limited knowledge of tc, I'm not able to limit the other users (the ones not in any group - all others, in fact) to a default speed, and my configuration seems awful to me. Here is the base of my tc qdisc and classes:

        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps

    To classify a user at a given speed I have to add one subclass and then associate one filter with it:

        # a member of group A
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps
        # its associated filter, matching his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:11

        # another member of group A
        tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:12

        # a member of group B
        tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 0xffff flowid 1:13

    I already know that a source port could be the same when coming from different IP addresses, but the application is behind a proxy, so I don't have to manage IP addresses in this situation.

    I would like to know how to limit all other users (requests/source ports, whatever you name it) to a given speed each - I mean each such connection should be able to use at most 100kbit/s, for example, not a shared 100kbit/s. I also would like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters with the same class, so each user could be handled by one class instead of one class per user. I appreciate any advice, thanks.
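
    On the simplification question: tc does allow many filters to point at the same class, so one class per group with one filter per source port is a workable sketch (the ports below are the examples from above). Note tc's units while you are at it: kbps means kilobytes per second, while kbit means kilobits per second, so a 512 kbit/s group cap would be written 512kbit:

        # one shared class for all of group A (512 kbit/s)
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbit ceil 512kbit

        # several filters can target the same class
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:11
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:11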

  • Apache httpOnly Cookie Information Disclosure CVE-2012-0053

    - by John
    A PCI compliance scan on a CentOS LAMP server fails with this message. The server header and ServerSignature don't expose the Apache version.

        Apache httpOnly Cookie Information Disclosure CVE-2012-0053

    Can this be resolved by simply specifying a custom ErrorDocument for the 400 Bad Request response? How is the scanner determining this vulnerability - is it provoking a bad request and then checking whether the response is the default Apache 400 page?
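
    A minimal sketch of the workaround the asker proposes - overriding the default 400 page with a plain string (the actual fix for CVE-2012-0053 is the vendor patch; this only changes what a fingerprinting scanner sees):

        # httpd.conf or .htaccess: serve a plain string instead of the default 400 page
        ErrorDocument 400 "Bad Request"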

  • Why does squid reject this multipart/form-data POST from curl?

    - by keturn
    This fails with a squid status 417 error, ERR_INVALID_REQ:

        $ curl --trace multipart-fail.log -F "source={}" http://127.0.0.1:3003/jslint

    - trace of the failing curl request
    - trace of a successful curl request that uses urlencoding (curl -d) instead of multipart (curl -F)
    - formatted version of squid's error message

    I've never hit this in practice through a web browser, so it's probably my curl usage rather than squid, but if I tell curl not to use the squid proxy, the web application on the other end accepts the request just fine. (If there's a more appropriate StackExchange site for this, please let me know.)

  • A couple of questions about proxy servers, VPNs, and how they work

    - by Q8Y
    I have a couple of questions related to security. Correct me if I'm wrong :)

    When I request something (e.g. visit www.google.com), my computer sends the request to my ISP, and my ISP's proxy server acts as a middle man: it asks for the site (www.google.com), retrieves it, and sends it back to me. I know it's done like that, so in this situation my ISP knows everything I request, and the proxy server is the default one set up when I get an internet subscription. My question: if I use another proxy (let's assume a highly anonymous one through which my ISP can't detect my IP address), do I still go through my ISP, which then redirects me to the new proxy server I chose? Will the ISP know that someone is using another proxy, or does the traffic go through another network rather than my ISP's? I don't have a clear view of this.

    This question is related to the first one. When I use a VPN, I know that the VPN provides tunneling, encryption, and many more features than a proxy can, so my data travels securely and my ISP can't know what I'm doing. But my questions are: where does the tunnel start? Does it start after I pass the ISP's network (since they are the ones responsible for forwarding my data and requests)? If so, then not all of my connection is tunneled; there is a part that is not, since everything I do has to go through my ISP first. Correct me if I misunderstand this.

    I know that a VPN can make my computer virtually present in another place and able to access its resources (e.g. being in my office while I'm at home). If I use a VPN service provider so that I can access the internet securely and without being monitored by my ISP, where is my encrypted data handled: at my ISP or at the VPN service provider? And if I use a VPN, does anyone on the internet know what I'm doing or who I am, even the VPN service provider? Can they know me? I'd think they must at least know which person is asking for the VPN service, am I right?

  • IIS 7.5 logging: SQL Server vs. file

    - by stacker
    I want to know whether having IIS log directly to SQL Server is resource-intensive, or whether a better solution is to generate log files and import them into SQL Server every hour. Is it a very big cost to log each request directly to SQL Server? The pages open a connection to the database for each request anyway.
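
    One hedged way to do the hourly import, assuming Microsoft's LogParser is available (the exact flags below are from memory and worth checking against LogParser's own help output):

        LogParser "SELECT * INTO IISLog FROM u_ex*.log" -i:IISW3C -o:SQL -server:SQLHOST -database:WebLogs -createTable:ON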

  • How a DNS server resolves when web servers are geographically distributed

    - by Supratik
    A domain abc.com has two web servers in two different locations, one in India and another in Malaysia. If requests are handled by a server depending on the location the request originates from, how does DNS resolution work for such geographically distributed servers when my client system is configured to use a local DNS server in India or a DNS server in Malaysia?
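
    A hedged way to observe this kind of geo-aware resolution from the client side (with abc.com standing in for the real domain, as above): query the name through resolvers in different regions and compare the answers:

        # answer as seen through the locally configured resolver
        dig abc.com +short

        # answer as seen through a specific remote resolver
        dig @8.8.8.8 abc.com +short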

  • What is the best IIS tracing tool you have used?

    - by Vivek
    I have spent the majority of my career using and troubleshooting the IIS web server. To me, the best thing that ever happened to a web admin is FRT (Failed Request Tracing) in IIS 7.0. I have used Event Tracing for Windows as well, and FRT is just as helpful. Is there any other tracing tool that gives such good, in-depth understanding of request flow through the pipeline?

  • Varnish: Non-Cache/Data Fetch + Load-Balance

    - by xperator
    Someone commented on my previous question and said it's possible to do this with Varnish. Instead of:

        Client Request -> Varnish LB -> Backend -> Varnish LB -> Client

    I want to have a direct reply from the backend to the client, instead of going back through the LB:

        Client Request -> Varnish LB -> Backend -> Client

    This is not working:

        sub vcl_pass {
            if (req.http.host ~ "^(www.)?example.com$") {
                set req.backend = baz;
                return (pass);
            }
        }

  • Making Monit's URL check follow redirects

    - by beck
    I am looking to use monit to keep an eye on my site. I want it to treat the site like an external user would, so I am testing the URL, but it doesn't seem to follow redirects: the content check is performed on the HTML of the redirect response itself.

        # this check works:
        if failed url http://www.sharelatex.com/blog/posts/future.html content == "301"

        # this check fails:
        if failed url http://www.sharelatex.com/blog/posts/future.html content == "actual content"

    Finding out how to get the URL check to follow a 30X would be great.

  • Blocking specific IP requests

    - by user42908
    I own a VPS running Ubuntu with an Apache stack. Recently I have been getting continuous requests from static-195.22.94.120.addr.tdcsong.se.54303 : 12337. I have already installed arno-iptables-firewall and have iptables blocking 195.22.94.120, yet I still see requests from that IP in tcpdump. What else can I do to protect my VPS? Thank you.

  • Why is my htaccess file preventing access to my MP3 file?

    - by Andrew
    My Zend Framework application has a public directory which contains an htaccess file. If a requested file isn't found in the public directory, the htaccess routes the request through the application. I have an MP3 file within my public directory, but the htaccess file is routing the request through the application anyway! Do you see anything wrong with my htaccess file?

        AddDefaultCharset utf-8
        RewriteEngine on
        RewriteRule ^Resources/.* - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule !\.(js|ico|gif|jpg|png|css|htm|html|php|pdf|doc|txt|swf|xml|mp3)$ /index.php [NC]
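
    A quick, hedged way to see who actually answers for the MP3 (hypothetical URL below): fetch only the headers and compare them with those of a file known to be served statically; a response generated by index.php will typically differ in Content-Type and caching headers:

        curl -I http://example.com/somefile.mp3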

  • Internal Server Error on HTTPS SSL URL

    - by spike5792
    I am running cPanel/WHM on an Apache server and have just installed an SSL certificate for a single domain. The domain/server is on a fixed, dedicated IP address. I'm given the "successfully installed" message when installing the SSL certificate, yet when visiting the domain over https the 500 Internal Server Error message appears:

        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Additionally, a 500 Internal Server Error error was encountered while trying to use an ErrorDocument to handle the request.

  • How can I ask for a new DHCP lease on Windows 7?

    - by Pat
    In Windows 7, how do I request a new DHCP lease? What I need is the equivalent of the "Repair" button in Windows XP. The "Diagnose" button seems to do a few things, but it doesn't request a new DHCP lease if one is already available. Disabling and re-enabling the card does the trick, but it messes up any program capturing traffic on the interface.
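
    The usual command-line route, which releases the current lease and requests a fresh one without touching the interface state (run from an elevated command prompt):

        ipconfig /release
        ipconfig /renew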

  • ASP.NET MVC: An error has occurred when trying to create a controller

    - by Grayson Mitchell
    I have gotten the following error a few times in my MVC applications, and have only managed to get past it by recreating my entire solution from scratch. The error message says to make sure there is a parameterless public constructor, but of course there is one. What else could this error refer to? (It looks like it can't find the controller at all.)

    Code where the error occurs:

        public void Page_Load(object sender, System.EventArgs e)
        {
            // Change the current path so that the Routing handler can correctly interpret
            // the request, then restore the original path so that the OutputCache module
            // can correctly process the response (if caching is enabled).
            string originalPath = Request.Path;
            HttpContext.Current.RewritePath(Request.ApplicationPath, false);
            IHttpHandler httpHandler = new MvcHttpHandler();
            httpHandler.ProcessRequest(HttpContext.Current);   // <-- the error is thrown here
            HttpContext.Current.RewritePath(originalPath, false);
        }

    Error message:

        An error occurred when trying to create a controller of type
        'Moe.Tactical.Ttas.Web.Controllers.TtasController'. Make sure that the
        controller has a parameterless public constructor.
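
    For context, a hedged note on what the default controller factory needs: it creates the controller through its public parameterless constructor, and any exception thrown during construction (including in field initializers or a base constructor) is reported with this same generic message rather than the real cause. A sketch of the shape being assumed:

        public class TtasController : Controller
        {
            // must be reachable and must not throw; an exception anywhere during
            // construction surfaces as the generic "parameterless public
            // constructor" error
            public TtasController()
            {
            }
        }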

  • XSD validation vs. validation with XSD-generated classes

    - by Miral
    In my project I have a very big XSD file which I use to validate XML requests to, and responses from, a 3rd party.

    For this scenario I see 2 approaches:
    1) Create the XML and then validate it against the given XSD.
    2) Generate classes from the XSD with an XSD gen tool, add an extra bit of attributes, and use those for validation.

    Validation in the second approach works somewhat like this:
    a) Convert the XML request/response into an object with XML serialization.
    b) Validate the object via the custom attributes set on each property, i.e. pass the object to a method which validates it by iterating through the properties and the custom attributes on each one; this returns a boolean that determines whether the XML request is valid.

    Now the concern: which approach is better in terms of performance, or anything else?
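
    A minimal sketch of approach 1 in C#, assuming hypothetical file names; an XmlReader configured with XmlReaderSettings streams the document and reports each schema violation through the validation callback:

        using System;
        using System.Xml;
        using System.Xml.Schema;

        static bool ValidateAgainstXsd(string xmlPath, string xsdPath)
        {
            bool valid = true;
            var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
            settings.Schemas.Add(null, xsdPath);            // null = take targetNamespace from the schema
            settings.ValidationEventHandler += (s, e) => valid = false;

            using (XmlReader reader = XmlReader.Create(xmlPath, settings))
            {
                while (reader.Read()) { }                   // validation happens while streaming
            }
            return valid;
        }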

  • VirtualBox on Ubuntu 12.04 and the 3.5 kernel

    - by kas
    I have installed the 3.5 kernel under Ubuntu 12.04. When I install VirtualBox I receive the following error:

        Setting up virtualbox (4.1.12-dfsg-2ubuntu0.2) ...
         * Stopping VirtualBox kernel modules                      [ OK ]
         * Starting VirtualBox kernel modules
         * No suitable module for running kernel found             [fail]
        invoke-rc.d: initscript virtualbox, action "restart" failed.
        Processing triggers for python-central ...
        Setting up virtualbox-dkms (4.1.12-dfsg-2ubuntu0.2) ...
        Loading new virtualbox-4.1.12 DKMS files...
        First Installation: checking all kernels...
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
         * Stopping VirtualBox kernel modules                      [ OK ]
         * Starting VirtualBox kernel modules
         * No suitable module for running kernel found             [fail]
        invoke-rc.d: initscript virtualbox, action "restart" failed.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...

    Does anyone know how I might be able to resolve this?

    Edit - here is the make.log:

        DKMS make.log for virtualbox-4.1.12 for kernel 3.5.0-18-generic (x86_64)
        Mon Nov 19 12:12:23 EST 2012
        make: Entering directory `/usr/src/linux-headers-3.5.0-18-generic'
          LD      /var/lib/dkms/virtualbox/4.1.12/build/built-in.o
          LD      /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/built-in.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/linux/SUPDrv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrvSem.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/alloc-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/initterm-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/memobj-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/mpnotification-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/powernotification-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/assert-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/alloc-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/initterm-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o
        /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c: In function ‘rtR0MemObjLinuxDoMmap’:
        /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c:1150:9: error: implicit declaration of function ‘do_mmap’ [-Werror=implicit-function-declaration]
        cc1: some warnings being treated as errors
        make[2]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o] Error 1
        make[1]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv] Error 2
        make: *** [_module_/var/lib/dkms/virtualbox/4.1.12/build] Error 2
        make: Leaving directory `/usr/src/linux-headers-3.5.0-18-generic'

  • Reporting Services - It's a Wrap!

    - by smisner
    If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different, and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download.

    First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or output to a page-based renderer like PDF or Word) will be rendered such that the first page has the row group headers and the initial set of columns, as shown in Figure 1. The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2. This pattern continues until all columns are rendered.

    The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders property of the tablix report item and changing its value to True. The second (and subsequent) pages of the matrix now look like the image shown in Figure 3. The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns.

    What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another. For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables:

        SELECT TOP (100)
            e.LastName + ', ' + e.FirstName AS EmployeeName,
            d.FullDateAlternateKey,
            f.SalesOrderNumber,
            p.EnglishProductName,
            SUM(SalesAmount) AS SalesAmount
        FROM FactResellerSales AS f
            INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey
            INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey
            INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey
        GROUP BY p.EnglishProductName, d.FullDateAlternateKey,
            e.LastName + ', ' + e.FirstName, f.SalesOrderNumber
        ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName

    To start the report:

    1. Add a matrix to the report body and drag Employee Name to the row header, which also creates a group.
    2. Drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4.

    Now for some trickiness. Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right. Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data.

    Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber].

    Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5.

    Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps:

    1. Click on the Rows label, which then displays RowGroup in the Row Groups pane below the report body.
    2. Right-click on RowGroup, click Delete Group, and select the option to delete associated rows and columns.

    As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue:

    1. Drag EnglishProductName to the data textbox (below the line).
    2. Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below.
    3. Add the SalesAmount field to the new data textbox.
    4. Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box.

    Now you're ready for the fit and finish phase:

    1. Resize the column containing the embedded matrix so that it fits completely. Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear.
    2. Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName]. Replace the expression with the following: =RowNumber("SalesOrderNumber"). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix.
    3. The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber. Change the group expression to the following: =Ceiling(RowNumber("EmployeeName")/3). Dividing by 3 makes every three EmployeeName/SalesOrderNumber combinations share one row, so three sets of wrapped columns appear before the layout moves to a new row.
    4. The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes.

    And voila - Figure 6 shows a matrix report with wrapping columns.

  • How to filter the jqGrid data NOT using the built in search/filter box

    - by Jimbo
    I want users to be able to filter grid data without using the intrinsic search box. I have created two input fields for dates (from and to) and now need to tell the grid to adopt these as its filter and then to request new data. Forging a server request for grid data (bypassing the grid) and setting the grid's data to the response won't work: as soon as the user tries to re-order the results or change the page, the grid will request new data from the server using a blank filter. I can't seem to find grid API to achieve this - does anyone have any ideas? Thanks.
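
    A hedged sketch of the usual pattern, assuming hypothetical element ids: merge the two date inputs into the grid's postData so that every subsequent paging or sorting request carries the same filter, then reload the grid:

        // on some "apply filter" action
        $("#applyFilter").click(function () {
            $("#list").jqGrid("setGridParam", {
                postData: {
                    dateFrom: $("#dateFrom").val(),   // hypothetical date inputs
                    dateTo:   $("#dateTo").val()
                },
                page: 1                               // restart on the first page
            }).trigger("reloadGrid");
        });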

  • HttpWebRequest and BindIPEndPointDelegate getting socket exception

    - by Evgeny Gavrin
    I've got C# code running on a computer with multiple network interfaces, and the following code to select the IP address the HttpWebRequest's ServicePoint binds to:

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(remoteFilename);
        request.KeepAlive = false;
        request.ServicePoint.BindIPEndPointDelegate = delegate(
            ServicePoint servicePoint, IPEndPoint remoteEndPoint, int retryCount)
        {
            return new IPEndPoint(IPAddress.Parse(ipAddr), 0);
        };

    But it works only for one of the available network interfaces. Trying to access the remote server through the others throws an exception:

        System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A socket operation was attempted to an unreachable network

    How can this be solved?
