Search Results

Search found 24207 results on 969 pages for 'anonymous users'.


  • What does it mean that "Lisp can be written in itself?"

    - by Mason Wheeler
    Paul Graham wrote that "The unusual thing about Lisp-- in fact, the defining quality of Lisp-- is that it can be written in itself." But that doesn't seem the least bit unusual or definitive to me. ISTM that a programming language is defined by two things: Its compiler or interpreter, which defines the syntax and the semantics for the language by fiat, and its standard library, which defines to a large degree the idioms and techniques that skilled users will use when writing code in the language. With a few specific exceptions, (the non-C# members of the .NET family, for example,) most languages' standard libraries are written in that language for two very good reasons: because it will share the same set of syntactical definitions, function calling conventions, and the general "look and feel" of the language, and because the people who are likely to write a standard library for a programming language are its users, and particularly its designer(s). So there's nothing unique there; that's pretty standard. And again, there's nothing unique or unusual about a language's compiler being written in itself. C compilers are written in C. Pascal compilers are written in Pascal. Mono's C# compiler is written in C#. Heck, even some scripting languages have implementations "written in itself". So what does it mean that Lisp is unusual in being written in itself?
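
    A minimal sketch (in Python, purely for illustration; nothing below comes from the question) of the kind of evaluator being discussed: a tiny s-expression interpreter. Graham's point is that in Lisp an evaluator like this can be written in a few lines of Lisp itself, because Lisp programs are Lisp data (lists), so "written in itself" refers to the language's eval being naturally expressible in the language it evaluates, not merely to a compiler being self-hosted.

      # Hypothetical mini-evaluator for Lisp-like expressions written as nested Python lists.
      def lisp_eval(expr, env):
          if isinstance(expr, str):            # a symbol: look it up in the environment
              return env[expr]
          if not isinstance(expr, list):       # a literal such as a number
              return expr
          head = expr[0]
          if head == "quote":                  # (quote x) -> x: code treated as data
              return expr[1]
          if head == "if":                     # (if test then else)
              _, test, then, alt = expr
              return lisp_eval(then if lisp_eval(test, env) else alt, env)
          if head == "lambda":                 # (lambda (params) body)
              _, params, body = expr
              return lambda *args: lisp_eval(body, {**env, **dict(zip(params, args))})
          fn = lisp_eval(head, env)            # otherwise: function application
          args = [lisp_eval(a, env) for a in expr[1:]]
          return fn(*args)

      env = {"+": lambda a, b: a + b}
      # ((lambda (x) (+ x 1)) 41)  =>  42
      print(lisp_eval([["lambda", ["x"], ["+", "x", 1]], 41], env))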

    Read the article

  • encrypt apache and mysql servers

    - by stormdrain
    I have a question about encrypting disks. I have 2 servers: 1 is apache for web/frontend and it talks to server 2 which is mysql. They are all for intranet only; no external access. I was looking into using PGP or GnuPG to encrypt the disks. I'm not clear, though, as to exactly how this would work. Where would the keys be stored? On the client? On apache? If there is a key on apache to access mysql, does there need to be a key for each user? If so, if key 1 is used to alter some data, would then that data be inaccessible to a user using key 2? And the apache key, would that only be accessible to users with local keys? Is encryption done on the fly? Does it degrade performance? What would be the best approach to encrypt the data on these servers, but have them accessible to users? Thanks!

    Read the article

  • Error while trying to install Community Engine: NameError - "Undefined local variable or method 'map

    - by floatingfrisbee
    I'm trying to install Community Engine using the instructions here: http://github.com/bborn/communityengine At first I thought it might be because I had Rails 2.3.5 and desert 0.5.3 which were higher versions than what was mentioned on the installation site. However moving to rails 2.3.4 and desert 0.5.2 did not work. Any ideas as to what might be going on? $ script/generate plugin_migration /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecat ed and will be removed on or after August 2010. Use #requirement /cygdrive/c/users/me/jesse/projects/ceng1/config/routes.rb:2: undefined local variable or method `map' for main:Object (NameError ) from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:147:in `load_without_new_constant _marking' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:147:in `load_without_desert' from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:18:in `load' from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:32:in `__each_matching_file' from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:17:in `load' from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!' from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `each' from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!' from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:266:in `reload!' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:537:in `initialize_routing' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:188:in `process' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `send' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `run' from /cygdrive/c/users/me/jesse/projects/ceng1/config/environment.rb:6 from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/generate.rb:1 from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from script/generate:3

    Read the article

  • Unit test with live data

    - by Kurresmack
    Hey, I have googled this a little and didn't really find the answer I needed. I am working on a webpage in C# with MSSQL and LINQ for a customer. I want the users to be able to send messages to each other. So what I do is unit test this with live data. The problem is that I now depend on having at least 2 users whose IDs I know. Furthermore I have to clean up after myself. This leads to rather large unit tests that test a lot in one test. Let's say I would like to update a user. That would mean that I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails while updating I have to delete the user manually. If I did it any other way, without live data, I would not be able to know for sure that the data was present in the database after updating etc. What is the proper way to do this without having a test that tests a lot of functionality by itself?
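
    A minimal sketch of one common pattern for this (Python and sqlite3 here purely as an illustration; the question is about C#/MSSQL/LINQ, and the schema is an assumption): run each test inside a transaction that is rolled back afterwards, so the test can create exactly the users it needs and no manual cleanup or pre-existing IDs are required.

      import sqlite3
      import unittest

      class UserMessageTests(unittest.TestCase):
          def setUp(self):
              # In-memory stand-in for the real database; against a shared database the
              # same idea applies: open a transaction here and roll it back in tearDown.
              self.conn = sqlite3.connect(":memory:")
              self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
              self.conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, "
                                "sender INTEGER, recipient INTEGER, body TEXT)")

          def tearDown(self):
              self.conn.rollback()   # discard everything the test inserted
              self.conn.close()

          def test_send_message(self):
              cur = self.conn.cursor()
              cur.execute("INSERT INTO users (name) VALUES ('alice')")
              alice = cur.lastrowid
              cur.execute("INSERT INTO users (name) VALUES ('bob')")
              bob = cur.lastrowid
              cur.execute("INSERT INTO messages (sender, recipient, body) VALUES (?, ?, ?)",
                          (alice, bob, "hello"))
              count = cur.execute("SELECT COUNT(*) FROM messages WHERE recipient = ?",
                                  (bob,)).fetchone()[0]
              self.assertEqual(count, 1)   # one focused assertion per test

      if __name__ == "__main__":
          unittest.main()

    With sqlite3's default transaction handling the INSERTs above run inside an implicit transaction, which tearDown rolls back; a commonly used equivalent in .NET is wrapping the test in a TransactionScope that is disposed without calling Complete(), which rolls everything back the same way.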

    Read the article

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username / password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different and is managed by different subsystems. They do however interact with the same tables / data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inheriting off that table (either in the single table, or as two separate tables with a 1 to 1 PK with AuditableIdentity). All operations would use the common AuditableIdentity PK for the CreatedBy, ModifiedBy etc. columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or the system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that? Is there a best practice for this scenario? Is this an appropriate way of approaching the problem - or would you not bother with the common table and just rely on joins (to the two separate, unrelated user / token tables) later to determine which user type matches which audit records?
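
    A rough sketch of the common-identity shape being described (sqlite syntax via Python just for concreteness; every table and column name below is assumed, not taken from the real schema): one row in AuditableIdentity per actor, the human and token details hanging off it 1:1, and the audit columns on data tables holding the shared identity key.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE AuditableIdentity (
          id            INTEGER PRIMARY KEY,
          identity_type TEXT NOT NULL CHECK (identity_type IN ('human', 'token'))
      );

      CREATE TABLE HumanUser (            -- username / password accounts
          id       INTEGER PRIMARY KEY REFERENCES AuditableIdentity(id),
          username TEXT NOT NULL
      );

      CREATE TABLE OAuthAccount (         -- delegation-authorized, token-based accounts
          id    INTEGER PRIMARY KEY REFERENCES AuditableIdentity(id),
          token TEXT NOT NULL
      );

      CREATE TABLE Widget (               -- an example audited data table
          id         INTEGER PRIMARY KEY,
          name       TEXT,
          CreatedBy  INTEGER,             -- no FK, as in the question, but the value
          ModifiedBy INTEGER              -- is always an AuditableIdentity.id
      );
      """)

      # Whether a change came from a human or a token is then one join:
      # SELECT w.id, a.identity_type
      # FROM Widget w JOIN AuditableIdentity a ON a.id = w.ModifiedBy;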

    Read the article

  • What Scheme Does Ghuloum Use?

    - by Don Wakefield
    I'm trying to work my way through Compilers: Backend to Frontend (and Back to Front Again) by Abdulaziz Ghuloum. It seems abbreviated from what one would expect in a full course/seminar, so I'm trying to fill in the pieces myself. For instance, I have tried to use his testing framework in the R5RS flavor of DrScheme, but it doesn't seem to like the macro stuff: src/ghuloum/tests/tests-driver.scm:6:4: read: illegal use of open square bracket I've read his intro paper on the course, An Incremental Approach to Compiler Construction, which gives a great overview of the techniques used, and mentions a couple of Schemes with features one might want to implement for 'extra credit', but he doesn't mention the Scheme he uses in the course. Update I'm still digging into the original question (investigating options such as Petit Scheme suggested by Eli below), but found an interesting link relating to Gholoum's work, so I am including it here. [Ikarus Scheme](http://en.wikipedia.org/wiki/Ikarus_(Scheme_implementation)) is the actual implementation of Ghuloum's ideas, and appears to have been part of his Ph.D. work. It's supposed to be one of the first implementations of R6RS. I'm trying to install Ikarus now, but the configure script doesn't want to recognize my system's install of libgmp.so, so my problems are still unresolved. Example The following example seems to work in PLT 2.4.2 running in DrEd using the Pretty Big (require lang/plt-pretty-big) (load "/Users/donaldwakefield/ghuloum/tests/tests-driver.scm") (load "/Users/donaldwakefield/ghuloum/tests/tests-1.1-req.scm") (define (emit-program x) (unless (integer? x) (error "---")) (emit " .text") (emit " .globl scheme_entry") (emit " .type scheme_entry, @function") (emit "scheme_entry:") (emit " movl $~s, %eax" x) (emit " ret") ) Attempting to replace the require directive with #lang scheme results in the error message foo.scm:7:3: expand: unbound identifier in module in: emit which appears to be due to a failure to load tests-driver.scm. Attempting to use #lang r6rs disables the REPL, which I'd really like to use, so I'm going to try to continue with Pretty Big. My thanks to Eli Barzilay for his patient help.

    Read the article

  • Algorithm To Select Most Popular Places from Database

    - by Russell C.
    We have a website that contains a database of places. For each place our users are able to take one of the following actions which we record: VIEW - View its profile RATING - Rate it on a scale of 1-5 stars REVIEW - Review it COMPLETED - Mark that they've been there WISH LIST - Mark that they want to go there FAVORITE - Mark that it's one of their favorites In our database table of places each place contains a count of the number of times each action above was taken as well as the average rating given by users. views ratings avg_rating completed wishlist favorite What we want to be able to do is generate lists of the top places using the above information. Ideally, we would want to be able to generate this list using a relatively simple SQL query without needing to do any legwork to calculate additional fields or stack rank places against one another. That being said, since we only have about 50,000 places we could run a nightly cron job to calculate some fields such as rankings on different categories if it would make a meaningful difference in the overall results of our top places. I'd appreciate it if you could make some suggestions on how we should think about bubbling the best places to the top, which criteria we should weight more heavily, and given that information - suggest what the MySQL query would need to look like in order to select the top 10 places. One thing to note is that at this time we are less concerned with the recency of a place being popular - meaning that looking at the aggregate information is fine and that more recent data doesn't need to be weighted more heavily. Thanks in advance for your help & advice!
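
    As a starting point, here is a small sketch (Python, with completely made-up weights; choosing the weights is exactly what the question asks about) of folding the per-place counters into one popularity score. The same expression could be computed by the nightly cron job and stored as a rank column, or written directly as an ORDER BY expression in MySQL.

      import math

      def popularity_score(place):
          """place carries the counters described above: views, ratings, avg_rating, ..."""
          return (
              1.0 * math.log1p(place["views"])                    # damp raw view counts
              + 3.0 * place["ratings"] * (place["avg_rating"] / 5.0)
              + 4.0 * place["completed"]
              + 2.0 * place["wishlist"]
              + 5.0 * place["favorite"]                           # strongest signal of intent
          )

      places = [
          {"name": "A", "views": 5000, "ratings": 40, "avg_rating": 4.6,
           "completed": 25, "wishlist": 60, "favorite": 12},
          {"name": "B", "views": 20000, "ratings": 10, "avg_rating": 3.1,
           "completed": 5, "wishlist": 8, "favorite": 1},
      ]
      top_10 = sorted(places, key=popularity_score, reverse=True)[:10]
      print([p["name"] for p in top_10])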

    Read the article

  • Mahout - Error when trying out wikipedia examples

    - by Li'
    Note this post is similar to Caused by: java.lang.ClassNotFoundException: classpath but different error message. When I try to run Wikipedia Bayes Example from https://cwiki.apache.org/confluence/display/MAHOUT/Wikipedia+Bayes+Example When I ran the following command : lis-macbook-pro:mahout-distribution-0.8 Li$ mahout wikipediaXMLSplitter -d examples/temp/enwiki-latest-pages-articles10.xml -o wikipedia/chunks -c 64 I got error message: MAHOUT_LOCAL is set, so we don't add HADOOP_CONF_DIR to classpath. MAHOUT_LOCAL is set, running locally SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/Users/Li/File/Java/mahout-distribution-0.8/examples/target/mahout-examples-0.8-job.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/Users/Li/File/Java/mahout-distribution-0.8/examples/target/dependency/slf4j-jcl-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.JCLLoggerFactory] Oct 21, 2013 4:25:47 PM org.slf4j.impl.JCLLoggerAdapter warn WARNING: Unable to add class: wikipediaXMLSplitter java.lang.ClassNotFoundException: wikipediaXMLSplitter at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:171) at org.apache.mahout.driver.MahoutDriver.addClass(MahoutDriver.java:236) at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:127) I am using Hadoop 1.2 and Mahout 0.8. mahout-distribution-0.8/bin has been added to $PATH. $MAHOUT_LOCAL is set to "True", so it runs locally. I dont know why I got "Unable to add class: wikipediaXMLSplitter"

    Read the article

  • find(:all) and then add data from another table to the object

    - by Koning Baard XIV
    I have two tables: create_table "friendships", :force => true do |t| t.integer "user1_id" t.integer "user2_id" t.boolean "hasaccepted" t.datetime "created_at" t.datetime "updated_at" end and create_table "users", :force => true do |t| t.string "email" t.string "password" t.string "phone" t.boolean "gender" t.datetime "created_at" t.datetime "updated_at" t.string "firstname" t.string "lastname" t.date "birthday" end I need to show the user a list of Friendrequests, so I use this method in my controller: def getfriendrequests respond_to do |format| case params[:id] when "to_me" @friendrequests = Friendship.find(:all, :conditions => { :user2_id => session[:user], :hasaccepted => false }) when "from_me" @friendrequests = Friendship.find(:all, :conditions => { :user1_id => session[:user], :hasaccepted => false }) end format.xml { render :xml => @friendrequests } format.json { render :json => @friendrequests } end end I do nearly everything using AJAX, so to fetch the First and Last name of the user with UID user2_id (the to_me param comes later, don't worry right now), I need a for loop which make multiple AJAX calls. This sucks and costs much bandwidth. So I'd rather like that getfriendrequests also returns the First and Last name of the corresponding users, so, e.g. the JSON response would not be: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3 } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4 } } ] but rather: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3, "firstname": "Jon", "lastname": "Skeet" } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4, "firstname": "Mark", "lastname": "Gravell" } } ] I thought of a for loop in the getfriendrequests method, but I don't know how to implement this, and maybe there is an easier way. It must also work for XML. Can anyone help me? Thanks

    Read the article

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into 2 parts: a Front End that holds all the objects except tables and a Back End that holds the tables. The msdn page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time there are any changes made to the front end they need to be deployed to every user machine. My question is: Assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers like possible corruptions etc.? Thank you EDIT Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? It is a follow-up to my previous question From SQL Server to MS Access 2007

    Read the article

  • Google App Engine - Uploading blobs and authentication

    - by Keyur
    (I tried asking this on the GAE forums but didn't get an answer so am trying it here.) Currently to upload blobs, the app engine's blob store service creates a unique one-time URL that a user can post blobs to. My requirement is that I only want authenticated / authorized users to post blobs in my application. I can achieve this currently if the page that includes the multipart form to upload blobs is in my application. However, I am looking at providing a "REST API" for my users to upload their blobs. While it is true that the one-time nature of the upload URL mitigates the chances of rogue use, it's still possible. I was wondering if there is anyone on the app engine team here that can consider a feature where developers can register an upload listener. (Or if there is already a way, I'll be all ears). A standard servlet filter could also potentially do the job. This will give us an opportunity to authenticate / validate / decorate requests before the request gets forwarded to the blob store service. Thanks, Keyur
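
    For comparison, a sketch of the usual workaround on the Python runtime (the question concerns the Java SDK and servlet filters; webapp2 and the handler paths here are assumptions, not part of the question): only hand out the one-time upload URL to a signed-in user, then check again in the upload callback and delete the blob if the check fails.

      import webapp2
      from google.appengine.api import users
      from google.appengine.ext import blobstore
      from google.appengine.ext.webapp import blobstore_handlers

      class GetUploadUrlHandler(webapp2.RequestHandler):
          def get(self):
              if not users.get_current_user():
                  self.abort(401)                      # only authenticated users get a URL
              self.response.write(blobstore.create_upload_url('/upload-complete'))

      class UploadCompleteHandler(blobstore_handlers.BlobstoreUploadHandler):
          def post(self):
              blob_info = self.get_uploads('file')[0]  # the blob is already stored at this point
              if not users.get_current_user():
                  blobstore.delete(blob_info.key())    # reject late: drop blobs from rogue posts
                  self.abort(403)
              self.response.write('stored %s' % blob_info.key())

      app = webapp2.WSGIApplication([
          ('/get-upload-url', GetUploadUrlHandler),
          ('/upload-complete', UploadCompleteHandler),
      ])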

    Read the article

  • Unit test insert/update/delete

    - by Kurresmack
    Hey, I have googled this a little and didn't really find the answer I needed. I am working on a webpage in C# with MSSQL and LINQ for a customer. I want the users to be able to send messages to each other. So what I do is unit test this with data that actually goes into the database. The problem is that I now depend on having at least 2 users whose IDs I know. Furthermore I have to clean up after myself. This leads to rather large unit tests that test a lot in one test. Let's say I would like to update a user. That would mean that I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails while updating I have to delete the user manually. If I did it any other way, without saving the data to the DB, I would not be able to know for sure that the data was present in the database after updating etc. What is the proper way to do this without having a test that tests a lot of functionality in one test?

    Read the article

  • Debugging fortran code in Eclipse with Photran and GDB debugger: missing symbols

    - by tvandenbrande
    I have a program, written in fortran90, previously successfully compiled on a compaq compiler and working, that I'm now trying to compile with gfortran. I can compile the code to an .exe and run it. It works fine until a certain point in the routine and then an error is thrown. My current configuration: Windows 7 Eclipse Juno with CDT Photran Cygwin installation with gfortran compiler and GDB debugger (gdb.exe) Configurations for the debugger: GDB command set: Standard (Windows) Protocol: mi Shared libraries: don't load shared library symbols automatically (when activating this, no changes are noted). When running the debug command I get the following output: .gdbinit: No such file or directory. Reading symbols from /cygdrive/c/Users/thys/Documents/doctoraat/12_in progress/Hamfem/Debug/Hamfem.exe...done. auto-solib-add on Undefined command: "auto-solib-add". Try "help". Warning: C:/Users/thys/Documents/doctoraat/12_in progress/Hamfem/Hamfem/in: No such file or directory. [New Thread 5816.0x1914] [New Thread 5816.0x654] Basicly that leaves me with 2 questions: Where can I find the .gdbinit file in the cygwin installation? Are there any other possible errors in my setup, or points to think about?

    Read the article

  • Advice on designing and building distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles. Each will be sending ~250 bytes every minute. The data contains the GPS location and everything from the CAN bus (every value we can read from the vehicle computer and dashboard). Data is sent over GSM/GPRS (using the UDP protocol). The estimated number of rows of this data per day is ~2000k. I see 3 main blocks. 1. Multithreaded Socket Server (MSS) - I have it. MSS stores received data in a queue (using NServiceBus). 2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which will send e-mails/sms texts). Rule example: as I said, among the received bytes there will be information about the current speed. When the speed is above 120: show an alert in the web application for specified users, send an e-mail, send an sms text. (There can be more than one instance of RPS.) 3. Web application - allows reporting and defining rules by users, monitoring alerts, etc. I'm looking for advice on how to design the communication between the RPS and the Web application. Some questions: - Should the Web application and RPS have separate databases, or will one central database be enough? I have one domain model in the web application. If there is one central database, can I use the same model (objects) on the RPS? And how do I send changed rules to the RPS? I am trying to decouple these blocks as much as possible. I'm planning to create a different instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.
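
    A small sketch of the kind of rule the RPS block would evaluate (the field layout, the 120 km/h threshold behaviour and the notifier interface are assumptions; the question only says the packet carries the GPS position and CAN bus values such as the current speed):

      from dataclasses import dataclass

      @dataclass
      class VehicleReading:          # one decoded ~250-byte packet
          vehicle_id: int
          lat: float
          lon: float
          speed_kmh: float

      @dataclass
      class SpeedRule:
          max_speed_kmh: float = 120.0
          notify_email: bool = True
          notify_sms: bool = True

      def evaluate(reading, rule, notifier):
          """Apply one rule to one reading and hand any alert to the notifier."""
          if reading.speed_kmh > rule.max_speed_kmh:
              alert = "vehicle %d at %.4f,%.4f doing %.0f km/h (limit %.0f)" % (
                  reading.vehicle_id, reading.lat, reading.lon,
                  reading.speed_kmh, rule.max_speed_kmh)
              notifier.web_alert(alert)            # shown in the web application
              if rule.notify_email:
                  notifier.email(alert)
              if rule.notify_sms:
                  notifier.sms(alert)

      class PrintNotifier:                         # stand-in for the Notifier Server
          def web_alert(self, msg): print("WEB:", msg)
          def email(self, msg): print("EMAIL:", msg)
          def sms(self, msg): print("SMS:", msg)

      evaluate(VehicleReading(42, 50.8503, 4.3517, 131.0), SpeedRule(), PrintNotifier())

    Keeping the rule definitions in one shared store, and having the web application publish a "rules changed" message onto the same queue the MSS already uses, is one way to decouple the two blocks while still letting every RPS instance reload rules on demand.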

    Read the article

  • Issues with securing ELMAH file

    - by Kumar
    I am trying to allow the access to the elmah.axd file for only a perticular login and all others should be denied. I have followed Phil Haack's tutorial for securing the file. <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/> <add verb="POST,GET,HEAD" path="admin/elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" /> </httpHandlers> <location path="admin"> <system.web> <authorization> <allow users="[email protected]"/> <deny users="*"/> </authorization> </system.web> </location> First I logged in as [email protected] and tried to access the http://localhost:58961/admin/elmah.axd file and I am rightly being redirected to the Login.aspx page. Next I logged in as [email protected] and was able to access the elmah file at http://localhost:58961/admin/elmah.axd. Now I logged in again as [email protected] and I was able to access the emlah file now. What is the reason for this behavior?

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as well as clustering the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts of installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was: Run setup.exe from patched media, selecting appropriate options After the install fails, I'd chose "Remove node from a SQL Server failover cluster" and remove the single, failed, node Attempt to diagnose problem, inspect event logs, etc. Delete the computer account that was created for the SQL Service from Active Directory Delete the MSSQL10.MSSQLSERVER folder from the shared data drive The error message I receive from the SQL Server installer is: The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F) Along with hundreds of the following errors in the Application event log: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. System configuration notes: Windows Server 2008 Enterprise Edition x64 SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media Dell PowerEdge servers Fibre attached storage

    Read the article

  • Optimize GROUP BY&ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way is to track video views so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing a query with which I get the videos from the database. The relevant tables are these: video (~239371 rows) VID(int), UID(int), title(varchar), status(enum), type(varchar), is_duplicate(enum), is_adult(enum), channel_id(tinyint) signup (~115440 rows) UID(int), username(varchar) videos_views (~359202 rows after 6 days of collecting data, so this table will grow rapidly) videos_id(int), views_date(date), num_of_views(int) The table video holds the videos, signup holds users and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but it takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size. SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber, v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num FROM video v LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id LEFT JOIN signup s on v.UID = s.UID WHERE v.status = 'Converted' AND v.type = 'public' AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10 AND vvt.views_date >= '2001-05-11' GROUP BY vvt.videos_id ORDER BY tmp_num DESC LIMIT 8 And here is a screenshot of the EXPLAIN result: So, how can I optimize this?

    Read the article

  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (server reboot usually). These "prime" SQL Server's page cache with the common core working set of the data so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4Gb in the machine, the DB is under 1.5Gb currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects & sysindexes and running SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are out-of-normal-working-hours (all the users are at most one timezone away and unlike me none of them work at silly hours), so such a process (until complete) slowing down users more than the working set not being primed at all would, is not an issue.
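
    A minimal sketch of the "hard way" script described above, with a couple of assumptions (pyodbc as the client, the newer sys.indexes/sys.tables views instead of sysobjects/sysindexes, and a COUNT_BIG with an index hint so every page of each index is read without dragging the rows back to the client; the connection string is a placeholder):

      import pyodbc

      conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                            "SERVER=.;DATABASE=MyDb;Trusted_Connection=yes")
      cur = conn.cursor()
      cur.execute("""
          SELECT s.name, t.name, i.name
          FROM sys.indexes i
          JOIN sys.tables t ON t.object_id = i.object_id
          JOIN sys.schemas s ON s.schema_id = t.schema_id
          WHERE i.name IS NOT NULL
      """)
      for schema_name, table_name, index_name in cur.fetchall():
          # Force a scan through this index; for the clustered index this also pulls
          # the table's data pages into the buffer pool.
          warm = "SELECT COUNT_BIG(*) FROM [{0}].[{1}] WITH (INDEX([{2}]))".format(
              schema_name, table_name, index_name)
          conn.cursor().execute(warm).fetchone()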

    Read the article

  • mysql image disable print download

    - by Vish
    Hi, we use a Flex AIR client and a WAMP server. TIFF images are stored in MySQL. Currently, I can download the image from the AIR client and it prompts with a download dialog. Things are fine up to this point. We got a new requirement: only some users can print the image that gets downloaded; other users should not be able to print the TIFF image. I'm wondering how to accomplish this. One idea, not sure if it's efficient, is to convert the requested image to PDF on the server side, disable the print option there (hoping there are APIs available) and send back the PDF. Please let me know of better ideas. Also, is there a way to prevent the file download dialog from popping up every time the file is requested for download? Can we just get the file stream to the client and manipulate it to open with a particular viewer or write it to PDF... Please help.

    Read the article

  • (Zywall USG 300) NAT bypassed when accessing in-house-server From LAN Via domain name

    - by mschr
    My situation is like this; I host a number of websites from within our joint network solution. On the network there are basically 3 categories: the known public, registered via MAC, given a static DHCP lease; the anonymous LAN connections, given a lease from a specific DHCP range; switches, unix hosts, firewall. Now, consider the following hosts which are of interest: 111.111.111.111 (Zywall USG 300 WAN); 192.168.1.1 (ZyWall USG 300 LAN) load balances and bw monitors plus handles NAT; 192.168.1.2 (Linux www) serves mydomain1.tld and mydomain2.tld; 192.168.123.123 (Random LAN client) accesses mydomain1.tld from LAN; 23.234.12.253 (Random External client) accesses mydomain1.tld via WAN. DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111 - and the Linux www serves the http parts with VirtualHost configurations, setting up the document roots per ServerName; this is not so interesting though.. A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT). Our problem follows: when accessing http://mydomain1.tld from outside the joint network (23.234.12.253 example host) - everything is fine, the zywall receives requests on port 80 and maps them to the linux host's httpd. However - when trying to go through the NAT from the LAN side (in-house, 192.168.123.123 example host) one gets filtered by the Zywall port 80 firewall. I know this only because port 443 is open for the administration interface and https://mydomain1.tld prompts for the zywall login. So my conclusion is that LAN clients accessing 111.111.111.111 are in fact routed to 192.168.1.1 whilst bypassing the NAT table. I need to know how to set up NAT / Policy Route so that LAN -> WAN -> LAN will function with proper network translations instead of doing the 'quick nameserver lookup' or whatever this might be.

    Read the article

  • How to properly logout of facebook

    - by Gublooo
    hey guys This is a repeated question and I have followed both the suggestions provided in these links: http://stackoverflow.com/questions/1386557/how-to-log-out-users-using-facebook-connect-in-php-and-zend/1386749#1386749 http://stackoverflow.com/questions/1546277/trouble-logging-out-of-a-facebook-connect-site-and-destroying-sessions The issue is - the code works 90% of the time. That's the weird part. Out of the 100 times I've logged in and out - I've experienced this problem 5-6 times and 2 of my beta test users have reported the same issue. So when it works - if u click the logout link - u get the facebook popup saying - you are being logged out - when it doesn't work - absolutely nothing happens - the page does not refresh - it just sits on that page doing nothing. This is the javascript code that gets called on clicking logout function logout() { FB.Connect.get_status().waitUntilReady(function(status) { switch(status) { case FB.ConnectState.connected: FB.Connect.logoutAndRedirect("http://www.example.com/login/logout"); break; case FB.ConnectState.userNotLoggedIn: window.location = "http://www.example.com/login/logout"; break; } }); return false; } This is the php code: $this->_auth->clearIdentity(); $face = Zend_Registry::get('facebook'); $fb = new Facebook($face['appapikey'], $face['appsecret']); //$fb->clear_cookie_state(); $fb->expire_session(); Has anyone experienced such sporadic issues? Thanks

    Read the article

  • Apache with JBOSS using AJP (mod_jk) giving spikes in thread count.

    - by Beginner
    We use Apache with JBOSS for hosting our application, but we found some issues related to the thread handling of mod_jk. Our website is a low-traffic website and has a maximum of 200-300 concurrent users during our website's peak activity time. As the traffic grows (not in terms of concurrent users, but in terms of cumulative requests which come to our server), the server stops serving requests for a long time; it doesn't crash, but it cannot serve requests for up to 20 mins. The JBOSS server console showed that 350 threads were busy on both servers although there was enough free memory, say more than 1-1.5 GB (2 servers were used for JBOSS, which were 64 bit, with 4 GB RAM allocated to JBOSS). In order to check the problem we were using the JBOSS and Apache Web Consoles, and we were seeing that the threads were stuck in S state for minutes although our pages take around 4-5 seconds to be served. We took a thread dump and found that the threads were mostly in WAITING state, which means that they were waiting indefinitely. These threads were not from our application classes but from the AJP 8009 port. Could somebody help me with this, as somebody else might also have hit this issue and solved it somehow. In case any more information is required, let me know. Also, is mod_proxy better than using mod_jk, or are there some other problems with mod_proxy which could be fatal for me if I switch to mod_proxy? The versions I used are as follows: Apache 2.0.52, JBOSS: 4.2.2, MOD_JK: 1.2.20, JDK: 1.6, Operating System: RHEL 4. Thanks for the help.

    Read the article

  • al32utf8 in oracle and SQL Server and DB2 pulling data

    - by Bob
    I have a non-UTF8 Oracle database running on 11.1.0.7. We need to support Greek characters, so we have two options: 1) use nvarchar and nclob fields for those fields that need Greek (it is not all fields) - we have tested this and gotten it to work with Java coding; 2) convert Oracle to an AL32UTF8 database. I am not asking how to do this; I got this from the Oracle Site/Oracle Support. I know what is involved: lossy data, etc., increasing the size of the database. My question is: we have users of our system that connect to our database with database links but work on SQL Server and IBM DB2 databases. I do not have access to those databases and I do not have experience with them. If they are not UTF-8 databases, what happens when they pull UTF8 data? I would assume that English/ASCII characters are fine and the Greek will end up as junk data. I also ran the Oracle Character Set Scanner (the Oracle command line utility you use to get info about the effects of a character set conversion). It says that my database will increase in size by about 20%. Does this have an effect on users with 3rd party databases? These are customers of our data and there is a limit to how much access I can have to them to run tests. Any information you have would be welcome.

    Read the article

  • How to make Facebook Authentication from Silverlight secure?

    - by SondreB
    I have the following scenario I want to complete: Website running some HTTP(S) services that returns data for a user. Same website is additionally hosting a Silverlight 4 app which calls these services. The Silverlight app is integrating with Facebook using the Facebook Developer Toolkit (http://facebooktoolkit.codeplex.com/). I have not fully decided whether I want Facebook-integration to be a "opt-in" option such as Spotify, or if I want to "lock" down my service with Facebook-only authentication. That's another discussion. How do I protect my API Key and Secret that I receive from Facebook in a Silverlight app? To me it's obvious that this is impossible as the code is running on the client, but is there a way I can make it harder or should I just live with the fact that third parties could potentially "act" as my own app? Using the Facebook Developer Toolkit, there is a following C# method in Silverlight that is executed from the JavaScript when the user has fully authenticated with Facebook using the Facebook Connect APIs. [ScriptableMember] public void LoggedIn(string sessionKey, string secret, int expires, long userId) { this.SessionKey = sessionKey; this.UserId = userId; Obvious the problem here is the fact that JavaScript is injection the userId, which is nothing but a simple number. This means anyone could potentially inject a different userId in JavaScript and have my app think it's someone else. This means someone could hijack the data within the services running on my website. The alternative that comes to mind is authenticating the users on my website, this way I'm never exposing any secrets and I can return an auth-cookie to the users after the initial authentication. Though this scenario doesn't work very well in an out-of-browser scenario where the user is running the Silverlight app locally and not from my website.

    Read the article

  • Authenticated user cannot log in, "The user does not exist or is not unique."

    - by Aquinas
    This is a weird one. I have a WSS3 site, no MOSS, with a custom membership and role provider that authenticates against CRM. All the users have also been added to the site user list so once logged in they have correct display names. On dev and stage everything works fine, but on UAT the users can't get past the login screen. The login screen is working, in that if you type an incorrect password for a user it comes back with the right message, meaning the custom provider is working fine. If you fill the login form in correctly you are immediately redirected straight back to the login screen, with the IIS logs showing that the login screen sent the authenticated user to the site and then was sent back. Setting the site to allow anonymous access shows that the user is not logged in on the site side after authenticating correctly. The ULS logs show: The user does not exist or is not unique. Found 1 trusted forests nzct.local. Found 0 trusted domains Adding logging code to the site I have verified that the membership provider is correctly set, and can find the user when asked. Also, when accessing the site user list, I can find the SP user with the right name. It just refuses to set the current user to be the authenticated user. Weird.

    Read the article
