Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

Page 422/683 | < Previous Page | 418 419 420 421 422 423 424 425 426 427 428 429  | Next Page >

  • Apache2 Segfault - need help interpreting this coredump (suspect cause is memcache / php session related)

    - by WayneDV
    Three Apache2 web servers running a PHP 5.2.3 web site. We're using Memcache to cache rendered pages but also as the storage engine of the PHP Sessions. At peak traffic times we're getting Apache segmentation faults on all three web servers and all HTTPD child processes segfault. My gut tells me that the increased Memcache traffic is stopping PHP sessions from being created or cleaned up and thus the processes die. Is it possible for someone to confirm that from the following?
      #0 _zend_mm_free_int (heap=0x7fb67a075820, p=0x7fb67a011538) at /usr/src/debug/php-5.3.3/Zend/zend_alloc.c:2018
      #1 0x00007fb665d02e82 in mmc_buffer_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:50
      #2 mmc_request_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:169
      #3 0x00007fb665d031ea in mmc_pool_free (pool=0x7fb67a00e458) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:917
      #4 0x00007fb665d0a2f1 in ps_close_memcache (mod_data=0x7fb66d625440) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_session.c:185
      #5 0x00007fb66d1b0935 in php_session_save_current_state () at /usr/src/debug/php-5.3.3/ext/session/session.c:625
      #6 php_session_flush () at /usr/src/debug/php-5.3.3/ext/session/session.c:1517
      #7 0x00007fb66d1b0c1b in zm_deactivate_session (type=<value optimized out>, module_number=<value optimized out>) at /usr/src/debug/php-5.3.3/ext/session/session.c:2171
      #8 0x00007fb66d2a719c in module_registry_cleanup (module=<value optimized out>) at /usr/src/debug/php-5.3.3/Zend/zend_API.c:2150
      #9 0x00007fb66d2b1994 in zend_hash_reverse_apply (ht=0x7fb66d629d60, apply_func=0x7fb66d2a7180 <module_registry_cleanup>) at /usr/src/debug/php-5.3.3/Zend/zend_hash.c:755
      #10 0x00007fb66d2a5c0d in zend_deactivate_modules () at /usr/src/debug/php-5.3.3/Zend/zend.c:866
      #11 0x00007fb66d2541b5 in php_request_shutdown (dummy=<value optimized out>) at /usr/src/debug/php-5.3.3/main/main.c:1607
      #12 0x00007fb66d32e037 in php_apache_request_dtor (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:509
      #13 php_handler (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:681
      #14 0x00007fb6784166f0 in ap_run_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:158
      #15 0x00007fb678419f58 in ap_invoke_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:372
      #16 0x00007fb6784254f0 in ap_process_request (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/modules/http/http_request.c:282
      #17 0x00007fb678422418 in ap_process_http_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/modules/http/http_core.c:190
      #18 0x00007fb67841e1b8 in ap_run_process_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/server/connection.c:43
      #19 0x00007fb678429f4b in child_main (child_num_arg=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:662
      #20 0x00007fb67842a21a in make_child (s=0x7fb679cd7860, slot=153) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:758
      #21 0x00007fb67842aea4 in perform_idle_server_maintenance (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:893
      #22 ap_mpm_run (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:1097
      #23 0x00007fb678402890 in main
(argc=1, argv=0x7fff6fecacb8) at /usr/src/debug/httpd-2.2.15/server/main.c:740 PHP.INI Follows: [PHP] engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED display_errors = Off display_startup_errors = Off log_errors = Off log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = Off variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_max_filesize = 2M allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 sendmail_path = /usr/sbin/sendmail -t -i mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [MySQL] mysql.allow_persistent = On mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_links = -1 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.save_path = "/var/lib/php/session" session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 1 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.entropy_file = session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] 
soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 /etc/php.d/memcached.ini : session.save_path="tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
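
    The main php.ini above still shows session.save_handler = files while /etc/php.d/memcached.ini only overrides save_path, yet the backtrace ends in ps_close_memcache, so the memcache save handler is clearly active at runtime; something equivalent to the following must already be in effect somewhere. A minimal sketch of a pecl/memcache 3.x session configuration (the second, commented server line is illustrative only):

      ; /etc/php.d/memcached.ini -- memcache-backed sessions with pecl/memcache 3.x
      extension = memcache.so
      session.save_handler = memcache
      session.save_path    = "tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
      ; several servers can be listed for failover, e.g.
      ; session.save_path  = "tcp://memcache1:11211, tcp://memcache2:11211"

    Note also that session.auto_start = 1 in the php.ini above means every request opens and later flushes a session, which matches the php_session_flush -> ps_close_memcache path in the trace, so every request at peak traffic passes through the memcache pool teardown where the crash occurs.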

    Read the article

  • http proxy caching headers

    - by David Hagan
    I have a service for which I'm about to upgrade the authentication. However, I'm trying to ensure that I make the right decision about where the encryption happens. I currently have two options:
      1. The authentication module is deployed to the client as a JavaScript library over HTTPS and executes client-side, so that the client can POST back an encrypted string.
      2. The authentication module is kept server-side so that the client need only POST back an unencrypted string.
    I know that many HTTP proxies cache/log the query string (and therefore any query parameters), but does anyone know of HTTP proxies that cache the headers as well? If the headers are being cached, then I'll clearly want to encrypt the password inside the SSL encryption, because to my understanding the headers of an HTTPS request may not always be encrypted (depending on the capabilities of the browser, etcetera). Can anyone shed any light on the caching of headers by HTTP proxies? Do you have one that does, or know of one that does?

    Read the article

  • Generic Repository with SQLite and SQL Compact Databases

    - by Andrew Petersen
    I am creating a project that has a mobile app (Xamarin.Android) using a SQLite database and a WPF application (Code First Entity Framework 5) using a SQL Compact database. This project will eventually have a SQL Server database as well. Because of this I am trying to create a generic repository, so that I can pass in the correct context depending on which application is making the request. The issue I ran into is that my DataContext for the SQL Compact database inherits from DbContext, while the SQLite database inherits from SQLiteConnection. What is the best way to make this generic, so that it doesn't matter what kind of database is on the back end? This is what I have tried so far on the SQL Compact side:
      public interface IRepository<TEntity>
      {
          TEntity Add(TEntity entity);
      }

      public class Repository<TEntity, TContext> : IRepository<TEntity>, IDisposable
          where TEntity : class
          where TContext : DbContext
      {
          private readonly TContext _context;

          public Repository(DbContext dbContext)
          {
              _context = dbContext as TContext;
          }

          public virtual TEntity Add(TEntity entity)
          {
              return _context.Set<TEntity>().Add(entity);
          }
      }
    And on the SQLite side:
      public class ElverDatabase : SQLiteConnection
      {
          static readonly object Locker = new object();

          public ElverDatabase(string path) : base(path)
          {
              CreateTable<Ticket>();
          }

          public int Add<T>(T item) where T : IBusinessEntity
          {
              lock (Locker)
              {
                  return Insert(item);
              }
          }
      }
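
    One way to make the two back ends interchangeable is to put an adapter in front of the sqlite-net connection so callers only ever see IRepository<TEntity>. A minimal sketch (the class name below is illustrative, not from the original project):

      // Adapter exposing the sqlite-net connection through the same interface
      // the Entity Framework repository implements.
      public class SqliteRepository<TEntity> : IRepository<TEntity>
      {
          private readonly SQLite.SQLiteConnection _db;

          public SqliteRepository(SQLite.SQLiteConnection db)
          {
              _db = db;
          }

          public TEntity Add(TEntity entity)
          {
              _db.Insert(entity);   // sqlite-net fills in the auto-increment key in place
              return entity;
          }
      }

    Each application then composes the repository it needs (EF-backed for WPF, sqlite-net-backed for Xamarin.Android) behind the shared interface, and the calling code never touches DbContext or SQLiteConnection directly.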

    Read the article

  • Apache2 alias in virtual host

    - by 0x7c00
    I have multiple virtual hosts on one server and plan to set up some aliases in one of them. So I added Alias /foo/ /path/to/foo/ inside the VirtualHost directive, but it has no effect: a request for host1/foo/ returns 404. If I add the same line to /etc/apache2/mods-available/alias.conf it works, but then host2 shares the alias too. Is there a way to make the alias work only for host1? By the way, when I run apache2ctl -l, mod_alias.c isn't listed, which seems odd.
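
    mod_alias is evidently loaded (otherwise the change in mods-available/alias.conf would not work); apache2ctl -l only lists modules compiled statically into the binary, so on Debian/Ubuntu check loaded shared modules with apache2ctl -M and enable with a2enmod alias if it is missing. A minimal sketch of scoping the alias to one vhost only (paths and hostname are placeholders, Apache 2.2 syntax):

      <VirtualHost *:80>
          ServerName host1
          DocumentRoot /var/www/host1

          # Alias visible to this vhost only; host2's vhost simply omits it.
          Alias /foo/ /path/to/foo/
          <Directory /path/to/foo/>
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

    Note that with trailing slashes on both arguments, a request for /foo (without the slash) will not match; either request /foo/ or declare the alias without the trailing slashes.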

    Read the article

  • Is there a security risk in allowing people to set their DNS so their own subdomains can be routed to my server?

    - by DantheMan
    Let's say that I have a web application, built in Django and deployed with Nginx. Is it a good idea to offer a service that allows customers to request that a subdomain of theirs be pointed at it? I figured this: if I don't allow it, some companies won't want to access the service from http://mydjangoappmadeupname.com/bigcorporation/ They would rather access it through http://service.bigcorporation.com That would effectively mask that they are using an outside resource. Is there a significant risk that I am overlooking? Also, do you think it would be easier to set things up in Django to handle it, letting Nginx accept all domains and pass them to Django, which would then filter out whether they are allowed or not, or would it be better to update my Nginx config each time a client wanted this changed?
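
    A hedged sketch of the "let Nginx accept everything and filter in Django" option (the port and upstream address are assumptions):

      server {
          listen 80 default_server;
          server_name _;                         # catch any Host header, customer domains included

          location / {
              proxy_pass http://127.0.0.1:8000;  # Django behind gunicorn/uwsgi, assumed
              proxy_set_header Host $host;       # let Django see the original domain
              proxy_set_header X-Real-IP $remote_addr;
          }
      }

    Django can then look at request.get_host() and decide whether the domain or subdomain belongs to a registered customer, which avoids touching the Nginx config every time a client is added.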

    Read the article

  • Phishing attack stuck with jsp loginAction.do page?

    - by user970533
    I'm testing a phishing attack against a staged replica of a JSP web application. I'm doing the usual thing: changing the post/action fields in the page source to divert the login to my own JSP script, capture the credentials and redirect the victim to the original website. It sounds easy, but trust me, it has taken me more than two weeks and I still cannot write the logins to a text file. I have tested the JSP page on my local WAMP server and it works fine. On the staging site, when I click OK on the user/password form I'm taken to a loginAction.do script instead; I checked this with the Tamper Data add-on in Firefox. The only way I could make my script run was to use Burp proxy to intercept the request and change the action parameter to point at my uploaded script. What does loginAction.do actually do? I've googled it - it's quite common in JSP applications. I've checked the code and nothing tells me why the page always posts to the .do script instead of mine. Is there some kind of redirection in Tomcat? I'd like to know why I can't exploit this attack vector. I need the community's help.
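
    For background, a .do URL is usually nothing Tomcat-specific: it is the servlet mapping for the Struts 1 controller, so every request matching *.do is routed to Struts and resolved against struts-config.xml rather than against the JSP itself. A typical (illustrative) web.xml fragment looks like this:

      <servlet>
          <servlet-name>action</servlet-name>
          <servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
          <init-param>
              <param-name>config</param-name>
              <param-value>/WEB-INF/struts-config.xml</param-value>
          </init-param>
      </servlet>
      <servlet-mapping>
          <servlet-name>action</servlet-name>
          <url-pattern>*.do</url-pattern>
      </servlet-mapping>

    The form's action="loginAction.do" is therefore matched to an <action path="/loginAction" ...> entry in struts-config.xml, and any forward or redirect observed after the POST most likely comes from that mapping rather than from a Tomcat-level redirection.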

    Read the article

  • BPM 11gR1 now available on Amazon EC2

    - by Prasen Palvankar
    BPM 11gR1 now available on Amazon EC2
    The new Oracle BPM 11gR1, including the latest Oracle SOA Suite 11gR1 Patchset-2, is now available as an Amazon Machine Image (AMI). This is a fully configured image which requires absolutely no installation and lets you get hands-on experience with the software within minutes. This image has all the required software installed and configured and includes the following:
      - Oracle 11g Database Standard Edition
      - Oracle SOA Suite 11gR1 Patch-set 2
      - Oracle BPM 11gR1
      - Oracle Webcenter with BPM Process Spaces
      - Oracle Universal Content Management
      - Oracle JDeveloper with SOA and BPM plugins
    Note: Use of this AMI requires acceptance of Oracle Technology Network (OTN) terms of use. To use this AMI, follow these steps:
      1. Login to your Amazon account and browse to the Amazon AWS Console. If this is the first time you are using Amazon Web Services please visit https://aws.amazon.com/ec2/ for information on Amazon Elastic Cloud Computing and how to get started with Amazon EC2.
      2. Make sure the security group that you will be using to launch the instance allows the following ports to be opened: 22 (SSH), 1521, 7001, 8001, 8888, 9001.
      3. Click on AMIs.
      4. Change the Viewing filters to 64-bit and enter soa-bpm in the search box. You should see the following AMI: 083342568607/oracle-soa-bpm-11gr1-ps2-4.1-pub
      5. Select the AMI and click on Launch or Spot Request. For more information on spot requests, please visit the Amazon EC2 link above.
      6. Accept all the defaults and launch the instance.
      7. When the instance state changes to running, copy the assigned public host name and connect to it using either PuTTY or the SSH command. For PuTTY usage, refer to this document.
      8. Once you are connected to the instance using PuTTY or SSH, you will be presented with the terms of use. Accept the terms of use to proceed. This will prompt you to set passwords for your oracle OS login as well as for VNC. Note that the instance will not be usable until you have accepted the terms of use.
    The instance is now ready to use. The SOA/BPM and other servers are automatically started once you accept the terms of use. Initial startups can take about 5-10 minutes. If you would like to use the JDeveloper installed in the AMI, you can access it either using VNC or NX. You can get the NX client from NoMachine. /home/oracle/README.txt contains all the URLs that you can use to access the Enterprise Manager, BPM Composer, BPM Workspace, Webcenter etc.
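
    The security-group step can also be scripted. A hedged sketch using the aws CLI, which postdates this post and is only an alternative to the console instructions above (the group id is a placeholder, and 0.0.0.0/0 should normally be narrowed to your own address range):

      for port in 22 1521 7001 8001 8888 9001; do
        aws ec2 authorize-security-group-ingress \
            --group-id sg-0123456789abcdef0 \
            --protocol tcp --port "$port" --cidr 0.0.0.0/0
      done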

    Read the article

  • Extract data from a specific range of cells in multiple worksheets in multiple files

    - by Michele
    Extract data from a specific range of cells (always the same cells) in multiple worksheets in multiple files, where one file = one day. I have 6 technicians each day of the week, Monday through Friday, so 5 files with 6 worksheets each. I have entered specific info in specific cells of every worksheet, and the range is constant (the same address in EVERY worksheet in every file). I need a formula to extract and calculate the data in the given range and dump it into another spreadsheet. I can forward an example file if it will help anyone answer my question, and more explanation is available on request. JUST PLEASE SOMEBODY HELP ME!!!!! Thank you all in advance. Regards, Michele
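
    For reference, Excel can pull from a closed workbook with an external reference, so one formula per technician/day in the summary sheet is usually enough. A sketch with placeholder file, sheet and range names:

      ='C:\Reports\[Monday.xlsx]Tech1'!$B$2
      =SUM('C:\Reports\[Monday.xlsx]Tech1'!$B$2:$B$10, 'C:\Reports\[Tuesday.xlsx]Tech1'!$B$2:$B$10)

    Note that INDIRECT-built references only work while the source workbooks are open, so hard-coded external references (or opening all five files first) are the safer route.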

    Read the article

  • Silverlight Cream for June 22, 2011 -- #1111

    - by Dave Campbell
    In this Issue: Kunal Chowdhury, Beth Massi, Mike Taulty, Xpert360, and Erno de Weerd. Above the Fold: Silverlight: "Silverlight, HTML and the WebBrowser Control for Offline Apps" Mike Taulty WP7: "Windows Phone 7 (Mango) Tutorial - 18 - Know about various Phone Tasks" Kunal Chowdhury LightSwitch: "How to Create a Simple Audit Trail (Change Log) in LightSwitch" Beth Massi From SilverlightCream.com: Windows Phone 7 (Mango) Tutorial - 18 - Know about various Phone Tasks Kunal Chowdhury has number 18 in his Mango series up and is discussing WP7.1 Microsoft.Phone.Tasks namespace classes How to Create a Simple Audit Trail (Change Log) in LightSwitch Beth Massi's latest is a demo of building an audit trail to track changes to records in LightSwitch Silverlight, HTML and the WebBrowser Control for Offline Apps Mike Taulty takes a good hard look at the WebBrowser control ... and all the permutations and gyrations. If you're using or going to use this control, you definitely want to read this article. PivotViewer Shorts Part 5: Invert Facet Category Selections Xpert360 has his 5th tutorial up on PivotViewer, covering the topic of inverting the facet category's selections... per reader request. Windows Phone 7: Drawing graphics for your application with Inkscape – Part III: Backgrounds Erno de Weerd has his 3rd tutorial in his series using Inkscape to draw graphics for your WP7 app... this one is on Background images, and staying within in the Marketplace guidelines of course Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • verisign certificate into jboss server SSL

    - by rfders
    Hi all, I'm trying to enable SSL on JBoss using a certificate previously issued by VeriSign. I imported both certificates, the server certificate and the CA certificate, into the keystore file, and I configured server.xml to use that keystore and enable SSL. When I start JBoss I get the error "certificate or key corresponds to the SSL cipher suites which are enabled". Question: reading posts on the internet, I found that every example starts by generating a Certificate Signing Request. Is that strictly necessary if I already have the server certificate, and does the CSR have to be imported into the keystore as well? At this point I'm very confused about this issue; I've tried almost every solution posted in several forums but so far I haven't had any luck. Can you give me some tips to solve this problem? Thanks in advance. This is my keystore:
      Keystore type: jks
      Keystore provider: SUN
      Your keystore contains 2 entries
      j2ee, Dec 29, 2009, trustedCertEntry,
      Certificate fingerprint (MD5): 69:CC:2D:2A:2D:EF:C4:DB:A2:26:35:57:06:29:7D:4C
      ugent, Dec 29, 2009, trustedCertEntry,
      Certificate fingerprint (MD5): AC:D8:0E:A2:7B:B7:2C:E7:00:DC:22:72:4A:5F:1E:92
    and my server.xml configuration:
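
    Both entries in that listing are trustedCertEntry, but for SSL the server certificate has to sit on a PrivateKeyEntry, i.e. be imported under the same alias whose key pair produced the CSR. A hedged sketch of the usual keytool sequence (alias and file names are illustrative):

      # 1. key pair (creates a PrivateKeyEntry under alias "server")
      keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -keystore server.keystore

      # 2. CSR to send to VeriSign
      keytool -certreq -alias server -file server.csr -keystore server.keystore

      # 3. import the CA chain first, then the issued certificate onto the SAME alias
      keytool -importcert -alias verisignca -file ca.pem -keystore server.keystore
      keytool -importcert -alias server -file server_cert.pem -keystore server.keystore

    If the private key was generated outside this keystore (for example with openssl), keytool cannot attach the certificate to it directly; convert key and certificate to a PKCS12 file and bring it in with keytool -importkeystore -srckeystore file.p12 -srcstoretype PKCS12 instead.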

    Read the article

  • CNet router - no field for private port

    - by Aadit M Shah
    I'm trying to configure port forwarding on my CNet router for a locally hosted HTTP server. The model number of my router is CQR-981 and the firmware version is 1.0.43. The problem is that there's no field to enter the private port of the HTTP server (the local port). According to the manual there should be one. Here's a picture of the manual: Here's a screenshot of my router page for port forwarding (with no field for private port): Is there some way I can circumvent this problem. Perhaps manually make an HTTP request to the HTTP server on the router to update the table with the private port number, or perhaps update my firmware to solve this problem.

    Read the article

  • Are you a GPGPU developer? Participate in our UX study

    - by Daniel Moth
    You know that I work on the parallel debugger in Visual Studio and I've talked about GPGPU before and I have also mentioned UX. Below is a request from my UX colleagues that pulls all of it together. If you write and debug parallel code that uses GPUs for non-graphical, computationally intensive operations, keep reading. The Microsoft Visual Studio Parallel Computing team is seeking developers for a 90-minute research study. The study will take place via LiveMeeting or at a usability lab in Redmond, depending on your preference. We will walk you through an example of debugging GPGPU code in Visual Studio with you giving us step-by-step feedback. ("Is this what you would expect?", "Are we showing you the things that would help you?", "How would you improve this?") The walkthrough utilizes a “paper” version of our current design. After the walkthrough, we would then show you some additional design ideas and seek your input on various design tradeoffs. Are you interested, or do you know someone who might be a good fit? Let us know at this address: [email protected]. Those who participate (and those who referred them) will receive a gratuity item from a list of current Microsoft products. Comments about this post welcome at the original blog.

    Read the article

  • Exception while bamboo is checking for changes in svn

    - by tangens
    While configuring a new test plan in Atlassian Bamboo, I get the following exception in the log:
      Recording error: Unable to detect changes : <projectname>
      Caused by: org.tmatesoft.svn.core.SVNException: svn: Unrecognized SSL message, plaintext connection? svn: OPTIONS request failed on <repo-path>
    I tried to access the SVN repository by hand and it worked (the URL looks like this: https://myrepo.org/path/to/repo). The SVN server sits behind an Apache HTTP server. Both the Bamboo server and the SVN server are running Debian 4.0. Has anybody seen this error before and can give me a hint how to fix it?
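
    "Unrecognized SSL message, plaintext connection?" generally means the Java SVN client reached something speaking plain HTTP on the port where it expected TLS (a proxy, or a redirect to port 80, for example). A couple of hedged checks run from the Bamboo host itself:

      # does port 443 answer with TLS from the Bamboo server's point of view?
      openssl s_client -connect myrepo.org:443 < /dev/null | head

      # does the same URL work with the command-line svn client on that host?
      svn info https://myrepo.org/path/to/repo

    It is also worth checking whether Bamboo's JVM is started with proxy settings (e.g. -Dhttps.proxyHost) that the manual test by hand does not go through.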

    Read the article

  • Windows 7 / Windows Vista won't connect to 802.1x RADIUS Server

    - by Calvin Froedge
    I've deployed RADIUS and have no problems connecting with TTLS, PEAP, or MD5 using Linux, Mac, and Windows XP clients. For Windows 7 and Vista, I'm never prompted with the dialog box to enter username and password after configuring 802.1x support on the client. Steps taken:
      - Enabled Wired AutoConfig in services.msc
      - Set to use PEAP
      - Set to require user authentication
    When I enable the network connection it says "Trying to authenticate" and then fails with no error log or message given. The RADIUS server gives no indication that there was ever a request (no Access-Reject - the client simply never tries to authenticate). On the Windows 7 client, I can see that the DHCP server does not assign an IP to the client when 802.1x is enabled (though it does when it isn't). How can I debug this further? Has anyone else run into a similar situation? My RADIUS server is FreeRADIUS on Ubuntu 11.10.

    Read the article

  • Install proprietary drivers 14.04 NVIDIA (steam segmentation issue)

    - by allthosemiles
    Recently I finally got the official drivers for my NVIDIA 560 Ti card installed on Ubuntu 14.04 (hooray). However, when I started looking into installing Steam I got segmentation errors trying to run the software. I tried installing 32-bit libs and it seemed like they weren't available or were already installed. Upon further investigation, I found that a suggested solution is to install the proprietary drivers, install Steam, then switch back to the other drivers. I'm not really sure what "proprietary drivers" are, in all honesty. Has anyone gone through this process who could provide some insight? (For reference, I installed the official 64-bit driver from the NVIDIA site for my 560 Ti, and the Ubuntu install is 64-bit as well.) Update: this is the error text I get when trying to run Steam after installing it via the Ubuntu store.
      Running Steam on ubuntu 14.04 64-bit
      STEAM_RUNTIME is enabled automatically
      Installing breakpad exception handler for appid(steam)/version(1401381906_client)
      /home/dbrewer/.steam/steam.sh: line 755: 3943 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
      mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
      Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
      Reset complete!
      Restarting Steam by request...
      Running Steam on ubuntu 14.04 64-bit
      STEAM_RUNTIME has been set by the user to: /home/dbrewer/.steam/ubuntu12_32/steam-runtime
      Installing breakpad exception handler for appid(steam)/version(1401381906_client)
      /home/dbrewer/.steam/steam.sh: line 755: 4066 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
    What I get when I run "steam --reset":
      mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
      Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
      Reset complete!
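
    On 14.04 the "proprietary driver" normally means the closed-source NVIDIA packages from Ubuntu's restricted repository, as opposed to the open-source nouveau driver or a driver installed from NVIDIA's .run file. A hedged sketch of the packaged route (version numbers are the 14.04-era ones and may differ):

      # see which drivers Ubuntu recommends for the installed GPU (if ubuntu-drivers-common is installed)
      sudo ubuntu-drivers devices

      # install the packaged proprietary NVIDIA driver
      sudo apt-get install nvidia-331

      # 32-bit libraries Steam needs on a 64-bit install
      sudo apt-get install libc6:i386 libgl1-mesa-glx:i386 libgl1-mesa-dri:i386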

    Read the article

  • How Facebook's Ad Bid System Works

    - by pnongrata
    When you are creating an ad on Facebook, you are provided with a "suggested bid" range (e.g., $0.90 - $2.15 USD). According to this page: The suggested bid range is there to help you pick a maximum bid so your ad will be successful. It’s based on how many other advertisers are competing to show their ad to the same audience as you are. I'm interested in understanding what's actually going on (technically) under the hood here. Say a user logs into Facebook. On the server side, the HTTP request that the user's browser sent (as part of the login) is handled, and the server needs to figure out which ad to display back to the user. I assume this is where the "bidding" system comes into play? Say that, based on this user's demographics, and based on the audience targeting that several competing advertisers designed their campaigns with, let's pretend that Facebook sees a pool of 20 different ads it could return. How does this bidding system help Facebook determine which of the 20 ads it returns to the client side? I'm guessing that advertisers who "bid more" get prioritized over those who "bid less". But when does this bidding take place? How often does an advertiser need to re-bid? How long is a bid binding for? Once I understand these usage-related concepts behind ads, it will probably be obvious which of the following "selection strategies" the backend is using:
      - Round robin
      - Prioritized round robin
      - Randomized (doubtful)
      - History-based
      - MVP-based
    Thanks to anyone who can help point me in the right direction and explain what these suggested bid systems are and how they work.

    Read the article

  • homebrew in mac lion

    - by user975352
    I'm a beginner on Mac (Lion, 10.7.2). I don't know the Mac well, only Ubuntu. I installed Homebrew and ran the commands below.
      $ brew install git
    and then
      $ brew update
      error: Could not resolve host: github.com; nodename nor servname provided, or not known while accessing https://github.com/mxcl/homebrew.git/info/refs
      fatal: HTTP request failed
      Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master
    What's happening on my Mac? How do I resolve this? Would you help me?
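
    The brew error is a DNS failure ("Could not resolve host: github.com") rather than a Homebrew problem. A few hedged checks (the network service name varies; on 10.7 it may be "Wi-Fi", "AirPort" or "Ethernet"):

      # can the machine resolve github.com at all?
      dig github.com +short
      scutil --dns | head -20

      # which DNS servers is the active service using?
      networksetup -getdnsservers Wi-Fi

    If resolution fails here too, fixing the DNS servers (or any proxy settings the shell is missing) should make brew update work again.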

    Read the article

  • Apache gzip with chunked encoding

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to the HTTP response headers it's running on Apache-Coyote/1.1. The server sends responses with Transfer-Encoding: chunked; here is a sample response:
      HTTP/1.1 200 OK
      Server: Apache-Coyote/1.1
      Content-Type: text/xml;charset=utf-8
      Transfer-Encoding: chunked
      Date: Tue, 30 Mar 2010 06:13:52 GMT
    The problem is that when I ask the server for a gzipped response it often sends an incomplete one. I receive the response and see that the last chunk arrived, but after ungzipping I can see the body is partial. So my question is: is this a common Apache issue? Maybe one of its mod_deflate plugins or something? Ask questions if you need more info. Thanks.
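
    One hedged way to reproduce this outside the application, to separate a server-side (Coyote/deflate) problem from a client-side decoding problem, is to fetch the same URL with and without compression and compare the decoded bodies (host and path are placeholders):

      curl -sS --compressed -o with_gzip.xml    http://host/service
      curl -sS              -o without_gzip.xml http://host/service
      ls -l with_gzip.xml without_gzip.xml

    If the compressed fetch is reliably shorter or truncated at the same point, the problem is on the server side of the chunked/gzip pipeline rather than in your client code.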

    Read the article

  • Using NPS to restrict access to WLAN

    - by eric.s
    We currently have one WLAN that only domain users can connect to. We will be adding a guest WLAN and would like all non-domain machines to use it, even if the user has a domain account. We have set up NPS and can log in against it, but we cannot restrict the connection to require both a domain computer AND a domain account. Network policies are evaluated in order until one matches or the list runs out. For connection request policies, Domain Computers is not an option, which is where I thought I might be able to stop it. Has anyone been able to successfully restrict this without manually adding MACs to the WLAN controller?

    Read the article

  • Bizarre image loading problem from apache2

    - by NateDSaint
    Users have complained a few times about seeing a bizarre set of pink or green stripes on our webpage. At first I thought there were a rash of video card outages, but then someone sent me a screenshot from their browser (IE8). I later saw the same thing, but with slightly different colors on Chrome. Users have experienced this on their iPads and iPhones (iOS Safari). Because I've optimized the site to cache images, the bad image stays around until you clear your cache, so once you do, it resolves itself. My assumption is that the transmission of the image is being cut off mid-stream and then staying that way, but I can't for the life of me figure out why. Here's what I've checked: Header length is being sent, and transmission looks okay (wget sample below):
      wget http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
      --2012-04-05 08:46:00-- http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
      Resolving www.superiorlivestock.com (www.superiorlivestock.com)... [ip redacted]
      Connecting to www.superiorlivestock.com (www.superiorlivestock.com)|[ip redacted]|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 45926 (45K) [image/jpeg]
      Saving to: `wallbg2.jpg'
    Images are not being served gzipped (apache conf below):
      SetOutputFilter DEFLATE
      SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
    The site is www.superiorlivestock.com, and here's a sample of the bad page load: Is there something obvious I'm missing? Am I saving my images in the wrong format somehow?
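
    One classic cause of static files arriving truncated or banded like this is Apache's use of sendfile/mmap on certain filesystems (NFS mounts, VM shared folders, some VPS setups). A hedged first thing to rule out, in the main config or the affected vhost, followed by an Apache restart:

      EnableSendfile Off
      EnableMMAP Off

    Comparing a bad download byte-for-byte against the original file (for example with cmp) will also confirm whether the corruption happens in transit or in the stored file itself.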

    Read the article

  • What kind of server do I need to handle 10 million requests and mySQL queries a day?

    - by Calvin
    I'm a newbie at server administration and I'm looking for a powerful hosting service to host my new website. The website is basically the back end of a mobile online game, and it will:
      - handle up to 10 million HTTPS requests and MySQL queries a day
      - store up to 2000 GB of files on disk
      - transfer probably 5000 GB of data in and out per month
      - run on PHP and MySQL
      - have 10 million records in the MySQL database, each with 5-10 fields of around 100 bytes
    I really don't know what kind of server I need to handle these requirements. My questions are:
      - what CPU/RAM do I need for a dedicated server or VPS?
      - which hosting companies are able to offer this kind of dedicated server or VPS?
      - what about cloud computing? I've researched Amazon EC2 but it seems complicated to me. I've also contacted Rackspace, but strangely they said Cloud Sites is not suitable for my requirements. I wonder if there is another cloud hosting company.
      - any other alternative method?
    Thanks very much!
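
    For rough sizing, a back-of-envelope conversion of those numbers (averages only; real traffic is bursty, so peaks may be several times higher):

      10,000,000 requests/day ÷ 86,400 s/day        ≈ 116 requests/s on average
      5,000 GB/month × 8 ÷ (30 × 86,400 s)          ≈ 15 Mbit/s of average bandwidth
      10,000,000 rows × ~10 fields × ~100 bytes     ≈ 10 GB of raw row data, before indexes

    Whether a single dedicated server copes then depends mostly on how expensive each PHP request and MySQL query is, which these figures alone don't determine.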

    Read the article

  • Transient VO : Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use case wherein communication has to happen between regions residing under different task flows; essentially, they share a common set of parameters. Initially we resorted to pageFlowScope variables, but those are tightly coupled to the individual task flows. So how should the communication happen? Some of the alternatives we brainstormed:
      1. Use an ADF contextual event - this is a powerful feature for such use cases, but there is a considerable cost involved, so before resorting to it make sure you have a good enough reason. It performs a server round trip, and issuing the event and listening for it are also things that require your attention.
      2. Use a transient VO with shared data-control scope - with shared data-control scope, the transient VO rows are retained across the task flows in your application. All you have to do is create the attributes in the transient VO (preferably with the same names, for ease of conversion) and create some utility methods in the VOImpl for creating, updating and deleting a row. You also have to make sure the VO row is initialized once per HTTP request (this you can do in a bookmark method of your index.jspx, declared in adfc-config.xml), otherwise the UI fields bound to the transient VO attributes won't render.
    Hope this helps; this should be a common use case across applications.

    Read the article

  • Can't reconnect to my RDP session

    - by Jeremy Stein
    I use a VM through RDP. When I'm done for the day, I just disconnect the session and reconnect in the morning. That allows me to pick up what I was doing and not close all the applications. As I'm the only user, this generally works well. Today, I can't reconnect to my session from yesterday. When I RDP, I get a new session. When I run query user, I can see my other session:
      USERNAME    SESSIONNAME    ID    STATE     IDLE TIME    LOGON TIME
      me          rdp-tcp#82      1    Active    15:00        4/22/2010 9:00 AM
      me          rdp-tcp#91      2    Active    .            4/30/2010 9:00 AM
    If I try to use Terminal Services Manager to remote control the other session, I get this error:
      Session (ID 1) remote control failed (Error 7044 - The request to control another session remotely was denied.)
    Is there any way to reconnect to this session, or do I need to just kill it?
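
    If the goal is simply to get the old desktop back, tscon run from an elevated prompt inside the new session can usually attach the old session to the current connection (the session ID and connection name below are the ones from the query user output above):

      tscon 1 /dest:rdp-tcp#91

    This connects session 1 to the rdp-tcp#91 connection currently serving session 2, which is effectively the reconnect that did not happen automatically.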

    Read the article

  • Random timeouts in IIS7

    - by Cor-Paul
    Hi, I have a weird problem which I think is caused by my IIS7 installation on Vista 64 bit. I have a bunch of AJAX calls and JS dynamic file loads (~30) in a local application, and I get random timeouts (or so it seems) in my browser. In Chrome it looks like the page just stops loading (no HDD activity), in Firefox/Firebug I can see that some of the files are being loaded but they never actually finish. When I reload the page the same occurs, but for (random) other files that must be loaded. When I try to load one of those JS files concurrently (so during the timeout in FF) in another browser, the file loads there. So I am pretty sure the request can be handled by IIS. I am thinking about a limit or so on simultaneous requests from the same browser which is not working correctly, but I am pretty clueless on how to solve this. Does anyone recognize this problem and know a solution? Thanks!

    Read the article

  • Hosting multiple sites on a single webapp in tomcat

    - by satish
    Scenario: I have a website, www.mydomain.com. Registered users will be given the choice of getting a permanent URL to their account on mydomain.com as a subdomain (username.mydomain.com), or they can opt to have their own domain like www.userdomain.com. So the user can access his/her account through the subdomain URL or their own hostname, and the request should be forwarded to a specific URL on mydomain.com. For example: xyz.mydomain.com or www.xyz.com should serve the user's account from www.mydomain.com/webapp/account?id=xyz. The user should be completely unaware of where the content is coming from. Setup: my website runs as a webapp in Tomcat 5.5.28 with Apache as the web server. I am using a VPS, which means I have control over all the configuration files (Apache, Tomcat and DNS server). Can you tell me what configuration is needed to achieve the above scenario?
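
    A hedged sketch of the usual shape for this: one Apache vhost catches mydomain.com plus every customer host and forwards everything to the single Tomcat webapp over AJP (requires a2enmod proxy proxy_ajp; mod_proxy_http or mod_jk work the same way), and the webapp resolves the account from the Host header. The /webapp path and hostnames follow the example above:

      <VirtualHost *:80>
          ServerName  mydomain.com
          ServerAlias *.mydomain.com www.userdomain.com

          # Everything goes to the one webapp; it picks the account from the Host header.
          ProxyPreserveHost On
          ProxyPass        / ajp://localhost:8009/webapp/
          ProxyPassReverse / ajp://localhost:8009/webapp/
      </VirtualHost>

    Tomcat 5.5's default AJP connector already listens on 8009, ProxyPreserveHost keeps the original hostname visible to the webapp, and each new customer domain only needs its DNS pointed at this server plus an extra ServerAlias entry here; the webapp (for example a servlet filter) then maps xyz.mydomain.com or www.xyz.com to /webapp/account?id=xyz internally.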

    Read the article
