Search Results

Search found 17771 results on 711 pages for 'mongodb query'.


  • MySQL: Replicating the MySQL database

    - by Lee
    I have a primary write server (server1) that replicates to two query servers (server2 and server3). I am replicating all databases to these servers, including the mysql database. When I execute a GRANT like the following, replication works perfectly:

        GRANT execute,select ON database1.* TO `user1`@`host` IDENTIFIED BY 'password';

    However, if I issue the same GRANT to alter the permissions of an existing user, without the IDENTIFIED BY clause, replication breaks with:

        Error 'Can't find any matching row in the user table' on query. Default database: 'mysql'. Query: 'GRANT execute,select ON database1.* TO `user`@`host`'

    If I try to run the query manually on the slave I get the same error.

    Server 1 (protocol_version 10, version 5.0.77-log), my.cnf:
        [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql old_passwords=1 symbolic-links=0 max_allowed_packet = 100M log-bin = /var/lib/mysql/logs/borg-binlog.log max_binlog_size=50M expire_logs_days=7
        [mysql.server] user=mysql basedir=/var/lib
        [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid

    Server 2 (protocol_version 10, version 5.0.77-log), my.cnf:
        server-id=12 master-host=x master-user=x master-password=x master-connect-retry=60 relay-log=/var/lib/mysql/borg-relay.log relay-log-index=/var/lib/mysql/borg-relay-log.index

    Thanks for taking a look.

    Edit: Currently replication runs fine until the GRANT breaks it. mysql> show slave status\G on the slave reports: Slave_IO_State: Waiting for master to send event, Master_Host: 10.128.0.5, Master_User: repli-ragnarok, Master_Port: 3306, Connect_Retry: 60, Master_Log_File: borg-binlog.002730, Read_Master_Log_Pos: 4375760, Relay_Log_File: borg-relay.005489, Relay_Log_Pos: 4375899, Relay_Master_Log_File: borg-binlog.002730, Slave_IO_Running: Yes, Slave_SQL_Running: Yes, Last_Errno: 0, Exec_Master_Log_Pos: 4375760, Relay_Log_Space: 4375899, Seconds_Behind_Master: 0.

    Edit: Broken show slave status captured from history: Slave_IO_State: Waiting for master to send event, Master_Host: 10.128.0.5, Master_User: repli-valhalla, Master_Port: 3306, Connect_Retry: 60, Master_Log_File: borg-binlog.002729, Read_Master_Log_Pos: 40429793, Relay_Log_File: borg-relay.005486, Relay_Log_Pos: 40311514, Relay_Master_Log_File: borg-binlog.002729, Slave_IO_Running: Yes, Slave_SQL_Running: No, Last_Errno: 1133, Last_Error: Error 'Can't find any matching row in the user table' on query. Default database: 'mysql'. Query: 'GRANT execute,select ON auth_tracker.* TO `mail-sin1`@`%.sin1.netline.net.uk` IDENTIFIED BY 'mail-sin1666'', Skip_Counter: 0, Exec_Master_Log_Pos: 40311375, Relay_Log_Space: 40429932, Until_Condition: None, Master_SSL_Allowed: No, Seconds_Behind_Master: NULL.
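    For illustration, a minimal check on the slave that reported error 1133 is to confirm whether the account row actually exists there before the GRANT is replayed (a sketch run through the mysql client; the user/host values are placeholders, not taken from the post):

        # On the slave: does the account the GRANT refers to exist in mysql.user?
        mysql -e "SELECT User, Host FROM mysql.user WHERE User='user1' AND Host='host';"

        # If the row is missing, creating the account once (with IDENTIFIED BY) on the master
        # lets later plain GRANTs find a matching row on both master and slaves
        mysql -e "GRANT EXECUTE, SELECT ON database1.* TO 'user1'@'host' IDENTIFIED BY 'password';"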

    Read the article

  • CodePlex Daily Summary for Monday, May 10, 2010

    CodePlex Daily Summary for Monday, May 10, 2010New ProjectsAzure Publish-Subscribe: Infrastructure implementing the publish-subscribe pattern in a Windows Azure context. The unit of publishing is an XML document with an optional l...Bakalarska prace SCSF: This work dealing with the technology of Microsoft Software Factories which makes possible an efficient development of applications under MS Windows.Begtostudy-Test: NoteExpress User Tool (NEUT) is a opensource project for NoteExpress user developpers to share their tools and ideas using secondary development. N...CodeReview: Code Review is an open source development tool based on the same approach than FxCop - check the compiled assemblies to enforce good practices rule...CommonFilter: CommonFilter is a subset of the CommonData project, containing just the functions and unit tests for filtering user input.Custom SharepointDesigner Actions: These are a couple of custom actions that i use in sharepoint designer to ease workflow creation.Customer Portal Accelerator for Microsoft Dynamics CRM: The Customer Portal accelerator for Microsoft Dynamics CRM provides businesses the ability to deliver portal capabilities to their customers while ...Danzica Asset Management System: Danzica is an asset management system written in C# that can ingest all of your hardware management systems into a single cohesive portal, giving y...Dot2Silverlight: Dot2Silverlight is a project thats enables to render graphs (written in Dot format) in Silverlight. dot2silverlight, dot, silverlight, C#, graphviz...eGrid_Windows7: This project will include a new platform independent version of the eGrid project. The new version will run on windows 7, using WPF 4 and the Surfa...Game project JAMK: game project on jamkHamcast for multi station coordination: Amateur Radio multiple station operation tends to have loggers and operators striving to get particular information from each other, like what IP a...Headspring Labs: Headspring Labs showcases presentations, samples, and starter kits developed by Headspring.Hongrui Software Development Management Platform: Hongrui Software Development Management Platform 鸿瑞软件开发管理平台(HrSDMP) LinkField: 带有链接的多列 Field migre.me plugin for Seesmic Desktop Platform: migre.me is a brazilian service created to reduces the size of URLs and provides tracking data for shortened links. migre.me plugin for Seesmic ...MISAO: MISAO is a presentation tool.Mongodb Management Studio: Mongodb Management Studio makes it easier for mongodb users (including DBA/Developers/Administrators) to use mongodb. It's developed in ASP.NET 4.0...MS Build for DotNetNuke module development: Automate the task of creating DotNetNuke module PA packs easily using MS Build. Create manifest files, include version #'s automatically and more.Rubyish: C# extensions providing a rubyish syntax to C#SharePoint 2007 Web Parts: The goal of this project is to develop a set of web parts for SharePoint 2007.SharePoint 2010 Web Parts: The goal of this project is to develop a set of web parts for SharePoint 2010.Taxomatic: Taxomatic adds the ability to bulk create Content Types and Columns to a SharePoint site collection. 
It also caters for the export of the Content T...trackuda: trackuda - track the motion!Web Camera Shooter: Small tool for taking screenshots from web camera and saving them to disk with few image filters as options.Windows API Code Pack Contrib: Extensions to Windows API Code PackNew ReleasesAlan Platform: Technical Preview 1: В центре данного релиза интерфейс, точнее не сам интерфейс, а принцип, по которому он построен. Используя парочку предоставленных свойств, можно со...Begtostudy-Test: Test: Don't Download.BFBC2 PRoCon: PRoCon 0.3.5.1: Release Notes ComingCBM-Command: 2010-05-09: Release Notes - 2010-05-09New Features FILE COPYING Changes Removed the Swap Panels functionality to make room for file copying. It's still in th...CBM-Command: 2010-05-10: Release Notes - 2010-05-10New Features Launching PRG Files New color schemes to better match the C128 and C64 platforms Function Keys Changes ...CommonFilter: CommonFilter0.3D: This initial release of CommonFilter 0.3D is a subset of the CommonData solution that contains just the filter functions.Custom SharepointDesigner Actions: Custom SPD Actions v1.0: This is the first version of the actions library, it includes: Calling a Webservice. Convert a string to a double. Convert a string to an integ...Customer Portal Accelerator for Microsoft Dynamics CRM: Customer Portal Accelerator for Dynamics CRM: The Customer Portal accelerator for Microsoft Dynamics CRM provides businesses the ability to deliver portal capabilities to their customers while ...EPiMVC - EPiServer CMS with ASP.NET MVC: EPiMVC CTP1: First release that mainly addresses routing. The release is described in greater detail here.Helium Frog Animator: Helium Frog 2_06SW3: This is a first release (not for end users as it is not packaged) of mods intended to make the program run easier on netbooks and customised for us...HouseFly controls: HouseFly controls alpha 0.9.8.0: HouseFly controls release 0.9.8.0 alphaID3Tag.Net: ID3TagLib.Net 1.1: The ID3Tag team is proud to release a new version of the ID3tag lib! New features: Removing of ID3V1 and ID3V2 tags Logging operations ( if enab...IT Tycoon: IT Tycoon 0.2.1: Switched to .NET Framework 4 and implemented some Parallel.ForEach calls.MeaMod Playme: MeaMod Playme 0.9.6.5 Bucking Bunny: MeaMod Playme 0.9.6.5 Bucking Bunny Version 0.9.6.5 | Change Set: 48050 -- Added Playme Store -- Added Buy Album -- Added OCDS/Core system -- Adde...migre.me plugin for Seesmic Desktop Platform: migre.me plugin 0.7.0.1: Initial release. Compatible with Seesmic Desktop Platform 0.7.0.772.MISAO: Ver. 5 Alpha(2010-05-10): Alpha versionMultiwfn: multiwfn1.3.2_source: multiwfn1.3.2_sourcePocket Wiki: Desktop Wiki (in dev): Full screen wiki for the PC - supports the same parsers that Pocket Wiki does. Currently in development but usable. Left side shows listbox of all ...Rubyish: alpha: intial buildSevenZipLib Library: v9.13 beta: New release to match 7-zip 9.13 betaShake - C# Make: Shake v0.1.11: Initial version of Shake's services (API), command line parameters (dynamic) now available via ShakeServices class. Introducing interfaces and bas...SharpNotes: SharpNotes (New): This is the release of SharpNotes.SharpNotes: SharpNotes Source (New): This is the source release of SharpNotes.StackOverflow Desktop Client in C# and WPF: StackOverflow Client 1.0: Improved UI Showing votes/answers/views on popups. 
Bug fixesTaxomatic: Design_0: Design documentThe Movie DB API: TMDB API v1.2: Updated to reflect changes in The Movie DB API.Web Camera Shooter: 1.0.0.0: Initial release. Unstable. Often exception AccessViolation from Touchless SDK.WF Personalplaner: Personalplaner v1.7.29.10127: - Drag und Drop wurde beim Plan und in der Maske unter Plan\Plan-Layout anpassen in die Grids eingefügt - Weitere kleine bugfixesWindows Phone 7 Panorama & Pivot controls: panorama + pivot controls v0.7 (samples included): Panorama and Pivot Controls source code + sample projects. - Phone.Controls.Samples : source code for the PanoramaControl and PivotControl. - Pic...XmlCodeEditor: Release 0.9 Alpha: Release 0.9 AlphaMost Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrThe Information Literacy Education Learning Environment (ILE)Mirror Testing SystemCaliburn: An Application Framework for WPF and SilverlightjQuery Library for SharePoint Web Serviceswhitepatterns & practices - UnityTweetSharpBlogEngine.NET

    Read the article

  • How to get ip-address out of SPAMHAUS blacklist?

    - by ???????? ????? ???????????
    I frequently read that it is possible to remove individual IP addresses from the Spamhaus blacklist. Here is 91.205.43.252 (91.205.43.251 - 91.205.43.253), used by back3.stopspamers.com (back2.stopspamers.com, back1.stopspamers.com) in a geo-cluster on dedicated servers in Switzerland. The queries http://www.spamhaus.org/query/bl?ip=91.205.43.251 , http://www.spamhaus.org/query/bl?ip=91.205.43.252 and http://www.spamhaus.org/query/bl?ip=91.205.43.253 show that 91.205.43.251 - 91.205.43.253 are all listed under the SBL80808 record, and that record says: "Ref: SBL80808 91.205.40.0/22 is listed on the Spamhaus Block List (SBL) 01-Apr-2010 05:52 GMT | SR04 Spamming and now seems this place is involved in other fraud". The addresses 91.205.43.251-91.205.43.253 are not listed individually among the offending addresses, yet there is no way to remove them individually from the listing. How can these individual addresses (91.205.43.251-91.205.43.253) be removed from the Spamhaus blacklist? And why is Spamhaus blacklisting a spam-stopping service in the first place? This is only one example of many. My related posts: Blacklist IP database.

    Update: From the answer provided I realized that my question was not understood. These IP addresses, 91.205.43.251 - 91.205.43.253, are not blacklisted individually; they are blacklisted through their supernet, 91.205.40.0/22. Also note that the dedicated server, the ISP and the customer are in different, distant countries.

    Update 2: http://www.spamhaus.org/sbl/sbl.lasso?query=SBL80808#removal says: "To have record SBL80808 (91.205.40.0/22) removed from the SBL, the Abuse/Security representative of RIPE (or the Internet Service Provider responsible for supplying connectivity to 91.205.40.0/22) needs to contact the SBL Team". There are dozens of "abusers" in that SBL80808 listing, and the company using the dedicated server is not an ISP or a RIPE representative, so it cannot pursue this itself. Even if it could, it only takes someone pressing "Report spam" somewhere on the internet to be blacklisted again, so the approach is fruitless. These techniques are widely exploited by criminals and spammers; see also my post on blacklisting. This is just one specific example, but there are many, many more.
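    For illustration, a listing for a single address can also be checked directly against the Spamhaus DNSBL zones with dig, which is the same lookup mail servers perform (a sketch; the reversed-octet query form is standard DNSBL usage, and any 127.0.0.x answer means "listed"):

        # Check 91.205.43.252 against the SBL zone (octets reversed, zone appended)
        dig +short 252.43.205.91.sbl.spamhaus.org A
        dig +short 252.43.205.91.sbl.spamhaus.org TXT   # TXT usually carries the SBL record URL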

    Read the article

  • Meet SQLBI at PASS Summit 2012 #sqlpass

    - by Marco Russo (SQLBI)
    Next week Alberto Ferrari and I will be in Seattle at PASS Summit 2012. You can meet us at our sessions, at a book signing, and hopefully while watching other sessions during the conference. Here are our appointments:

    Thursday, November 08, 2012, 10:15 AM - 11:45 AM – Alberto Ferrari – Room 606-607 – Querying and Optimizing DAX (BIA-321-S). Do you want to learn how to write DAX queries and how to optimize them? Don’t miss this session!

    Thursday, November 08, 2012, 12:00 PM - 12:30 PM – Bookstore. Book signing event at the Bookstore corner with Alberto Ferrari, Marco Russo and Chris Webb. Visit the bookstore and sign your copy of our Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model book.

    Thursday, November 08, 2012, 1:30 PM - 2:45 PM – Marco Russo – Room 611 – Near Real-Time Analytics with xVelocity (without DirectQuery) (BIA-312). What latency can you tolerate for your data? Discover the limit in Tabular without using DirectQuery and learn how to optimize your data model and your queries for a near real-time analytical system. Not a trivial task, but more affordable than you might think.

    Friday, November 09, 2012, 9:45 AM - 11:00 AM – Parent-Child Hierarchies in Tabular (BIA-301). Multidimensional has more advanced support for hierarchies than Tabular, but in practice you can do almost the same things using data modeling, DAX functions and BIDS Helper!

    Friday, November 09, 2012, 1:00 PM - 2:15 PM – Marco Russo – Room 612 – Inside DAX Query Plans (BIA-403). Discover the query plan for your DAX query, learn how to read it, and use this information to optimize a DAX query.

    If you meet us at the conference, stop us and say hello: it’s always nice to meet our readers!

    Read the article

  • High Load mysql on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have Debian server with 32 gb memory. And there is apache2, memcached and nginx on this server. Memory load always on maximum. Only 500m free. Most memory leak do MySql. Apache only 70 clients configured, other services small memory usage. When mysql use all memory it stops. And nothing works, need mysql reboot. Mysql configured use maximum 24 gb memory. I have hight weight InnoDB bases. (400000 rows, 30 gb). And on server multithread daemon, that makes many inserts in this tables, thats why InnoDB. There is my mysql config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please, help me make it stable. Memory used /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem not in memory, but MySQL stops every day. As you can see, cache memory free 24 gb. Thank to Michael Hampton? for correction. Load overage on server 3.5. Maybe hdd or another problem? Maybe my config not optimal for 30gb InnoDB ? I'm already try mysqltuner and tunung-primer.sh , but they marked all green. Mysqltuner output mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'
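    For illustration, the "maximum possible memory usage" figure the tuning scripts report is roughly the global buffers plus the per-thread buffers multiplied by max_connections, and the inputs can be pulled straight from the running server (a sketch; it assumes the mysql client can connect with default credentials):

        # Worst-case memory estimate inputs, straight from the live server (values in bytes)
        mysql -N -e "SHOW VARIABLES WHERE Variable_name IN
          ('innodb_buffer_pool_size','key_buffer_size','query_cache_size',
           'sort_buffer_size','join_buffer_size','read_buffer_size',
           'read_rnd_buffer_size','tmp_table_size','max_heap_table_size','max_connections');"
        # global buffers + (per-thread buffers x max_connections) should stay well below 32 GB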

    Read the article

  • Setting up DNS server on VPS on the internet

    - by Nick Duffell
    I have followed multiple online tutorials on setting this up: BIND9 on a Debian server. It is the only server I have, so it acts as ns1 and ns2 as well as the host that the domain name itself points to. It all appears to be working, and when I dig the domain name from the server itself I get (what seems to me) the correct output:

        ; <<>> DiG 9.7.3 <<>> theonetekkit.com.au
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18593
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2
        ;; QUESTION SECTION:
        ;theonetekkit.com.au. IN A
        ;; ANSWER SECTION:
        theonetekkit.com.au. 3000 IN A 103.4.17.189
        ;; AUTHORITY SECTION:
        theonetekkit.com.au. 3000 IN NS ns1.theonetekkit.com.au.
        theonetekkit.com.au. 3000 IN NS ns2.theonetekkit.com.au.
        ;; ADDITIONAL SECTION:
        ns1.theonetekkit.com.au. 3000 IN A 103.4.17.189
        ns2.theonetekkit.com.au. 3000 IN A 103.4.17.189
        ;; Query time: 15 msec
        ;; SERVER: 103.4.17.189#53(103.4.17.189)
        ;; WHEN: Wed Nov 7 02:12:58 2012
        ;; MSG SIZE rcvd: 121

    When I dig it from another server or computer, however, I get a problem:

        ; <<>> DiG 9.7.3 <<>> theonetekkit.com.au
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 56637
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;theonetekkit.com.au. IN A
        ;; Query time: 22 msec
        ;; SERVER: 103.4.16.166#53(103.4.16.166)
        ;; WHEN: Wed Nov 7 02:12:40 2012
        ;; MSG SIZE rcvd: 37

    I have given it more than enough time for the records to refresh since setting up the DNS server, so I don't know what is causing this. Any ideas? Thanks
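    For illustration, two checks that usually narrow down a SERVFAIL like this (a sketch; it assumes the problem is either the delegation registered for the domain or filtering of port 53):

        # Follow the delegation from the root servers down, bypassing any resolver cache
        dig +trace theonetekkit.com.au A

        # Query the authoritative server directly from the outside machine, to separate
        # "wrong/missing delegation" from "port 53 blocked from outside"
        dig @103.4.17.189 theonetekkit.com.au A +norecurse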

    Read the article

  • Sesame Data Browser: filtering, sorting, selecting and linking

    - by Fabrice Marguerie
    I have deferred the post about how Sesame is built in favor of publishing a new update. This new release offers major features such as the ability to quickly filter and sort data, select columns, and create hyperlinks to OData.

    Filtering, sorting, selecting
    In order to filter data, you just have to use the filter row, which becomes available when you click on the funnel button. You can then type some text and select an operator. The data grid is refreshed immediately after you apply a filter. It works the same way for sorting: clicking on a column immediately updates the query and refreshes the grid. Note that multi-column sorting is possible by using SHIFT-click. Viewing data is not enough: you can also view and copy the query string that returns that data. One more thing you can do to shape data is to select which columns are displayed. Simply use the Column Chooser and you'll be done. Again, this updates the data and query string in real time.

    Linking to Sesame, linking to OData
    The other main feature of this release is the ability to create hyperlinks to Sesame. That's right, you can ask Sesame to give you a link you can display on a webpage, send in an email, or type in a chat session. You can get a link to a connection or to a query. You'll note that you can also decide to embed Sesame in a webpage... Here are some sample links created via Sesame: Netflix movies with high ratings, sorted by release year; Netflix horror movies from the 21st century; Northwind discontinued products with remaining stock; Netflix empty connection. I'll give more examples in a post to follow. There are many more minor improvements in this release, but I'll let you find out about them by yourself :-) Please try Sesame Data Browser now and let me know what you think!

    PS: if you use Sesame from the desktop, please use the "Remove this application" command in the context menu of the desktop app and then "Install on desktop" again in your web browser. I'll activate automatic updates with the next release.
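    For illustration, the filters, sort order and column selection described above all end up as options in the OData query string behind the grid (a sketch against the public Northwind sample feed; the endpoint URL and property names are assumptions, not something shown in the post):

        # "Northwind discontinued products with remaining stock", sorted, two columns only
        curl "https://services.odata.org/V3/Northwind/Northwind.svc/Products?\$filter=Discontinued%20eq%20true%20and%20UnitsInStock%20gt%200&\$orderby=UnitsInStock%20desc&\$select=ProductName,UnitsInStock"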

    Read the article

  • I thought everyone did it like this – Training Session Code Management

    - by Fatherjack
    One of an occasional series of blogs about things that I do that perhaps others don’t. From very early on in my dealings with SQL Server Management Studio I started using Solutions and Projects, which means I was already using them when writing sessions, and it wasn’t until speaking with someone at PASS Summit 2013 that I found out this was a process unheard of by some people. So, here we go: a run-through of how I create and manage code and other documents that I use in presentations. For people unsure what solutions and projects are:
    • Solution – a container for one or more projects.
    • Project – a container for files; .sql files are grouped as Queries, all other files are stored as Misc.

    How do I start? Open Management Studio as normal, then click File | New and select Project. This brings up the New Project dialog box, where you can select and add details as necessary in the places indicated. If this is the first project you are creating, be sure to select the Create directory for solution check box (4). If you know in advance that you are going to have more than one project in the solution, you may want to edit the Solution name (3), as by default it will take the name of the project that you enter at (2). This leads you to the corresponding folder structure (depending on the location that you chose at 3 above). In SSMS you need to turn on the Solution Explorer, either via the View menu or by pressing Ctrl + Alt + L. This brings up a dockable window that lets you quickly access the files that you choose to include in the Solution.

    Can we get to work and write some code yet, please? Yes, we can. As with many Microsoft products there are several ways to go about this, so let’s look at the easiest way when creating new code. When writing a presentation I usually start from the position we are currently in – a brand new solution and project with no code. Later on we will look at incorporating existing code files into the Project where we need them. Right-click on the Project name and choose Add New Query. As soon as you click this you will be prompted to select the SQL Server that you want to connect to, and once you have done that you will have your new query open in the text editor, with the Solution Explorer showing your server connection and your new query, and the Project folder updated to match. Now, once you have written your code, don’t press Save; choose Save As and give the code a better name than QueryX.sql. SSMS interprets this as a request to rename Query1, and your Project and the Project folder will show that SQLQuery1.sql no longer exists but there is now a file named as you requested. If you happen to click Save in error, right-click the query in the project and choose Rename. You can then alter the name as you like, even while it is open in the SSMS text editor, and the file will be renamed. When creating a set of scripts for a presentation I name files with a numeric prefix so that when they are sorted by name they are in the order that I need to use them during the session, as shown in the sketch below.

    I love this idea but I’ve got loads of existing scripts I want to put in Projects. Excellent – adding existing files to a project is easy. Let’s say you have query files in your My Documents folder and you want to bring them into the Project we have just created. Right-click on the Project and choose Add | Existing Item, then navigate to the location of your chosen file and select it. The file opens in the SSMS text editor and the Project is updated to show that the selected query is now part of your project. If you look in Windows Explorer you will see that the query file has been copied into the Project folder; the original file still remains in your My Documents (or wherever it existed). I’ll leave it as an exercise for the reader to explore creating further Projects within a solution, but will happily answer questions if you get into difficulties.

    What other advantages do I get from this? Well, as all your code is neatly in one Solution folder, and the folder contains only files that are pertinent to the session you are presenting, it becomes very easy to share this code: simply copy the whole folder onto a USB stick, blog, FTP location, wherever you choose, and it’s all there in one self-contained parcel. You don’t have to limit yourself to .sql query files; you can add any sort of document via the Add Existing Item method, so just try it out. Right-click on the project and choose Add | Existing Item, then change the file type filter. You can multi-select items here using Ctrl as you click each item you want. When you are done, click the Add button and the items will be brought into your project. Again, using this process means the files are copied into the project folder, leaving your original files untouched in their original location. Once they are here you can double-click them in the SSMS Solution Explorer to open them; files with a specific file type will launch the appropriate application – i.e. Word, Excel etc. However, if the files are something that the SSMS text editor can display, they will open in a tab in SSMS. Try it out with a text file or even a PS1 file …

    This sounds excellent, but what do I need to watch out for? One big thing to consider when working like this is the version of SSMS that you are using. There is something fundamentally different between the versions in the way that the project (.ssmssqlproj) and solution (.sqlsuo and .ssmssln) files are formatted. If you create a solution in an older version of SSMS and then open it in a newer version, you will be given the option to upgrade it. Once you do this upgrade, the older version of SSMS will no longer be able to open the solution. This ranks as more of an annoyance than a disaster, as the files within the projects are not affected in any way; you would just have to delete the files mentioned and recreate the solution in the older version again.

    Summary: here we have seen how SSMS Projects and Solutions can help keep related code files (and other document types) together in a neat structure so that they can be quickly navigated during a presentation, and how they also make it incredibly simple to distribute your code and share it with others. I hope this is of use to you and helps you bring more order into your SQL files. Whether you do technical presentations or not, having your code grouped and managed brings a lot of advantages as your code library expands.
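    For illustration, the numeric-prefix naming described above might leave a presentation project's folder looking something like this (hypothetical file names, shown as a directory listing):

        01 - Setup demo database.sql
        02 - Baseline query.sql
        03 - Add covering index.sql
        04 - Compare query plans.sql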

    Read the article

  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time for a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this. top top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05 Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem: 2042776 total, 1347916 used, 694860 free, 249396 buffers KiB Swap: 3976080 total, 30552 used, 3945528 free, 574164 cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 17445 bind 20 0 244m 42m 3124 S 99.4 2.2 2345:03 named rndc stats +++ Statistics Dump +++ (1352931389) ++ Incoming Requests ++ 65869 QUERY ++ Incoming Queries ++ 31809 A 241 NS 3 CNAME 27455 SOA 276 PTR 123 MX 462 TXT 5400 AAAA 7 A6 1 DS 14 DNSKEY 15 SPF 55 AXFR 8 ANY ++ Outgoing Queries ++ [View: internal] 22206 A 509 NS 10 SOA 25 PTR 12 MX 524 TXT 4851 AAAA 62 DNSKEY 19 SPF 3157 DLV [View: external] 87 A 2 NS 80 AAAA 120 DNSKEY 7 DLV [View: _bind] ++ Name Server Statistics ++ 65869 IPv4 requests received 27670 requests with EDNS(0) received 112 TCP requests received 65652 responses sent 20 truncated responses sent 27670 responses with EDNS(0) sent 62920 queries resulted in successful answer 37117 queries resulted in authoritative answer 28482 queries resulted in non authoritative answer 7 queries resulted in referral answer 591 queries resulted in nxrrset 53 queries resulted in SERVFAIL 2081 queries resulted in NXDOMAIN 14530 queries caused recursion 162 duplicate queries received 55 requested transfers completed ++ Zone Maintenance Statistics ++ 109536 IPv4 notifies sent ++ Resolver Statistics ++ [Common] [View: internal] 29362 IPv4 queries sent 2013 IPv6 queries sent 28531 IPv4 responses received 4209 NXDOMAIN received 6 SERVFAIL received 31 FORMERR received 32 EDNS(0) query failures 3359 query retries 836 query timeouts 5348 IPv4 NS address fetches 3271 IPv6 NS address fetches 83 IPv4 NS address fetch failed 2779 IPv6 NS address fetch failed 17421 DNSSEC validation attempted 12731 DNSSEC validation succeeded 4690 DNSSEC NX validation succeeded 21104 queries with RTT 10-100ms 7418 queries with RTT 100-500ms 3 queries with RTT 500-800ms 1 queries with RTT 800-1600ms [View: external] 192 IPv4 queries sent 104 IPv6 queries sent 192 IPv4 responses received 2 NXDOMAIN received 104 query retries 44 IPv4 NS address fetches 44 IPv6 NS address fetches 1 IPv4 NS address fetch failed 1 IPv6 NS address fetch failed 4 DNSSEC validation attempted 3 DNSSEC validation succeeded 1 DNSSEC NX validation succeeded 152 queries with RTT 10-100ms 40 queries with RTT 100-500ms [View: _bind] ++ Cache DB RRsets ++ [View: internal (Cache: internal)] 2007 A 652 NS 131 CNAME 1 MX 32 TXT 421 AAAA 28 DS 244 RRSIG 110 NSEC 3 DNSKEY 2 !A 2 !TXT 89 !AAAA 2 !SPF 14 !DLV 148 NXDOMAIN [View: external (Cache: external)] 55 A 12 NS 34 AAAA 2 DS 10 RRSIG 1 DNSKEY [View: _bind (Cache: _bind)] ++ Socket I/O Statistics ++ 82958 UDP/IPv4 sockets opened 2118 UDP/IPv6 sockets opened 4 TCP/IPv4 sockets opened 1 TCP/IPv6 sockets opened 82956 UDP/IPv4 sockets closed 2117 UDP/IPv6 sockets closed 58 TCP/IPv4 sockets closed 15 UDP/IPv4 socket bind failures 2117 UDP/IPv6 socket connect failures 29554 UDP/IPv4 connections established 59 TCP/IPv4 connections accepted 2117 UDP/IPv6 send errors 5 UDP/IPv4 recv errors ++ Per Zone Query Statistics ++ --- Statistics Dump --- (1352931389)
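    For illustration, a useful first step with a named process this busy is to turn query logging on briefly and see who is asking for what (a sketch using standard BIND tooling; the log destination depends on the local logging configuration, syslog is assumed here):

        # rndc querylog toggles query logging; run it once to enable, again to disable
        rndc querylog
        sleep 60
        rndc querylog

        # Tally queries per client address from the captured minute (log format may vary)
        grep ' query: ' /var/log/syslog | sed 's/.*client \([0-9a-fA-F.:]*\)#.*/\1/' | sort | uniq -c | sort -rn | head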

    Read the article

  • POST Fail via AJAX Request?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost (submitted to Stack Overflow, but maybe it is a server issue?). I am running a JavaScript log-out function called logOut() that makes a jQuery AJAX call to a PHP script:

        function logOut(){
            var data = new Object;
            data.log_out = true;
            $.ajax({
                type: 'POST',
                url: 'http://www.mydomain.com/functions.php',
                data: data,
                success: function() {
                    alert('done');
                }
            });
        }

    The PHP it calls is here:

        if(isset($_POST['log_out'])){
            $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
            $connection->runQuery($query); // <-- my own database class...
            // omitted code that clears session etc...
            die();
        }

    Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data does not trigger my query (and this lasts about an hour or so). I figured out that the POST data is not being set by adding this at the end of my script:

        $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
        $connection->runQuery($query);

    So now I know for certain my log-out function is being skipped, because my database contains the following data: [screenshot]. If it were NOT being skipped, my data would show up like this: [screenshot]. I know it is being skipped for two reasons: one, the die() at the end of my first function, and two, if it were a success a "logOutSuccess" would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But, if that's the case??? Thanks in advance. -J
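    For illustration, one quick way to separate "the browser never sent the request" from "the server dropped or redirected it" is to send the same POST outside the browser and look at the raw response (a sketch; the URL is the one from the post). Adding an error: callback to the $.ajax options would surface the same information in the browser.

        # Send the same POST the JavaScript sends and show status line, headers and body
        curl -v -X POST -d "log_out=true" "http://www.mydomain.com/functions.php"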

    Read the article

  • Oracle SQL Developer: Fetching SQL Statement Result Sets

    - by thatjeffsmith
    Running queries, browsing tables – you are often faced with many thousands, if not millions, of rows. Most people are happy looking at the first few rows, but occasionally you need to see more. SQL Developer doesn’t show you all records at once; instead, it brings the records down in ‘chunks’, as needed.

    How It Works
    There is a preference that tells SQL Developer how many records to get in a single request, or ‘fetch’, of records. The default is 50. So if I run a query that returns more than 50 rows, there are more than 50 records in the result set, but we have 50 in the grid to start with. We don’t actually know how many records are in the result set at this point; to show the record count, we have to physically query the database with a separate row-count query. All we know is that the query has finished executing and that there are rows available to fetch, and the tool tells us when it’s done. As you scroll through the grid, when you reach record 50 and keep scrolling, we fetch 50 more records. Or you can cheat to get to the ‘bottom’ of the result set: you can ask SQL Developer to just fetch all the records at once. Once all the records have been fetched, you’ll see "All rows fetched!"

    A word of caution: there’s a reason we have the default set to 50 and not 1000. Bringing back data can get expensive and heavy. We’ve found the best performance in the 50-to-200-record range.

    Read the article

  • Filtering Security Logs by User and Logon Type

    - by Trido
    I have been asked to find out when a user has logged on to the system in the last week. The audit logs in Windows should contain all the info I need. I think if I search for Event ID 4624 (Logon Success) with a specific AD user and Logon Type 2 (Interactive Logon) it should give me the information I need, but for the life of me I cannot figure out how to actually filter the Event Log to get it. Is it possible inside the Event Viewer, or do you need an external tool to parse it to this level? I found http://nerdsknowbest.blogspot.com.au/2013/03/filter-security-event-logs-by-user-in.html which seemed to be part of what I needed, and I modified it slightly to only give me the last 7 days' worth. Below is the XML I tried:

        <QueryList>
          <Query Id="0" Path="Security">
            <Select Path="Security">*[System[(EventID=4624) and TimeCreated[timediff(@SystemTime) <= 604800000]]]</Select>
            <Select Path="Security">*[EventData[Data[@Name='Logon Type']='2']]</Select>
            <Select Path="Security">*[EventData[Data[@Name='subjectUsername']='Domain\Username']]</Select>
          </Query>
        </QueryList>

    It only gave me the last 7 days, but the rest of it did not work. Can anyone assist me with this?

    EDIT: Thanks to the suggestions of Lucky Luke I have been making progress. Below is my current query, although, as I will explain, it isn't returning any results:

        <QueryList>
          <Query Id="0" Path="Security">
            <Select Path="Security">
              *[System[(EventID='4624')] and
                System[TimeCreated[timediff(@SystemTime) <= 604800000]] and
                EventData[Data[@Name='TargetUserName']='john.doe'] and
                EventData[Data[@Name='LogonType']='2']
              ]
            </Select>
          </Query>
        </QueryList>

    As I mentioned, it wasn't returning any results, so I have been messing with it a bit. I can get it to produce the results correctly until I add the LogonType line; after that, it returns no results. Any idea why that might be?

    EDIT 2: I updated the LogonType line to the following:

        EventData[Data[@Name='LogonType'] and (Data='2' or Data='7')]

    This should capture workstation logons as well as workstation unlocks, but I still get nothing. I then modify it to search for other logon types, like 3 or 8, which it finds plenty of. This leads me to believe that the query works correctly, but for some reason there are no entries in the Event Logs with Logon Type equal to 2, and that makes no sense to me. Is it possible to turn this off?
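    For illustration, the same XPath can be iterated on quickly from an elevated command prompt with wevtutil, instead of rebuilding the Event Viewer filter each time (a sketch; the account name is a placeholder):

        wevtutil qe Security /c:5 /rd:true /f:text /q:"*[System[(EventID=4624)] and EventData[Data[@Name='TargetUserName']='john.doe'] and EventData[Data[@Name='LogonType']='2']]"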

    Read the article

  • possible UDP attack on BIND?

    - by Waleed Hamra
    Hello everyone. I was surprised last month when my EC2 instance (an Ubuntu Precise server), which is supposed to still be under the free tier, accumulated lots of traffic. Today, while checking my current billing statement, I noticed I already have tons of traffic while still in the middle of the month, and I fear what my bill is going to be by the end of it. I installed bandwidthd, and after a few minutes I noticed lots of UDP traffic to 108.162.233.15. This is apparently a CloudFlare IP, and I don't have anything using CloudFlare (as far as I know). So I ran iftop to see what ports were being used, and I saw the UDP traffic coming from port 80 to my port 53. Why would a web server query DNS? So I stopped BIND on my server, ran it in foreground debugging mode, and saw the following query being repeated continuously:

        17-Nov-2012 12:30:58.216 client 108.162.233.15#80: UDP request
        17-Nov-2012 12:30:58.216 client 108.162.233.15#80: request is not signed
        17-Nov-2012 12:30:58.216 client 108.162.233.15#80: recursion available
        17-Nov-2012 12:30:58.216 client 108.162.233.15#80: query
        17-Nov-2012 12:30:58.216 client 108.162.233.15#80: query (cache) 'isc.org/ANY/IN' approved
        17-Nov-2012 12:30:58.216 client 108.162.233.15#80: send
        17-Nov-2012 12:30:58.216 client 108.162.233.15#80: sendto
        17-Nov-2012 12:30:58.216 client 108.162.233.15#80: senddone
        17-Nov-2012 12:30:58.217 client 108.162.233.15#80: next
        17-Nov-2012 12:30:58.217 client 108.162.233.15#80: endrequest
        17-Nov-2012 12:30:58.217 client @0x7fbee05126e0: udprecv
        17-Nov-2012 12:30:58.343 client 108.162.233.15#80: UDP request
        17-Nov-2012 12:30:58.343 client 108.162.233.15#80: request is not signed
        17-Nov-2012 12:30:58.343 client 108.162.233.15#80: recursion available
        17-Nov-2012 12:30:58.343 client 108.162.233.15#80: query
        17-Nov-2012 12:30:58.343 client 108.162.233.15#80: query (cache) 'isc.org/ANY/IN' approved
        17-Nov-2012 12:30:58.343 client 108.162.233.15#80: send
        17-Nov-2012 12:30:58.344 client 108.162.233.15#80: sendto
        17-Nov-2012 12:30:58.344 client 108.162.233.15#80: senddone
        17-Nov-2012 12:30:58.344 client 108.162.233.15#80: next
        17-Nov-2012 12:30:58.344 client 108.162.233.15#80: endrequest

    My question is: is this normal? Should I be worried? Or is this completely irrelevant to my data charges, and I should wait to see more data from bandwidthd? Thank you in advance.
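    For illustration, repeated 'isc.org/ANY/IN' queries from a source that never reads the answers are the classic signature of an open resolver being abused for a reflection/amplification attack, and the usual mitigation is to stop answering recursive queries for the whole internet (a sketch of the relevant named.conf options; the address list is a placeholder for your own networks):

        // named.conf sketch: only recurse for known clients
        acl "trusted" { 127.0.0.0/8; 10.0.0.0/8; };   // placeholder networks
        options {
            recursion yes;
            allow-recursion { trusted; };
            allow-query-cache { trusted; };
        };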

    Read the article

  • DNSSEC - Ad Flag not activated

    - by Arancha
    Hi all, I have some doubts regarding DNSSEC. I have one server acting as an Authoritative Name Server and another one as a Cache/Resolver. I'm using Bind 9.7.1-P2 and these are my configuration files: Named.conf (Authoritative Server) // Opciones de configuracion del servidor include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; notify yes; #files 65535; dnssec-enable yes; dnssec-validation yes; allow-transfer { 172.23.2.37; 172.23.3.39; }; transfer-format many-answers; transfers-per-ns 5; transfers-in 10; max-transfer-time-in 120; check-names master ignore; listen-on {172.23.2.57; 80.58.102.13; 80.58.102.103; 127.0.0.1; }; }; zone "test.dnssec" { type master; key-directory "keys"; file "db.test.dnssec.signed"; also-notify { 172.23.2.37 ; 172.23.3.39 ; }; allow-transfer { 172.23.2.37 ; 172.23.3.39 ; }; }; test.dnssec zone test.dnssec. 86400 IN SOA ns.test.dnssec. mxadmin.test.dnssec. ( 2010090902 ; serial 21600 ; refresh (6 hours) 3600 ; retry (1 hour) 1814400 ; expire (3 weeks) 172800 ; minimum (2 days) ) 86400 RRSIG SOA 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. eY99laB6PrtETaXLdCS+G8Uq1lIK7d5vxUB1 pAQ9npv/YbvX1pdWZKGojDgPGw8V65Q0zKQo YW1VuBzvwfSRKax+yrjJzvHQGfCZPJWARehK hgLxHOfXLVH7tyndvLD49ZKcWtrop+Tuy4n9 apWWfSJZxCOngwS7zUi0zCTKfPs= ) 86400 NS ns1.test.dnssec. 86400 RRSIG NS 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeo idNuytxbiFnbCOunzvaYpgvDpEr0CPrwXaDL TSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFw aaQXFc3rDLsXjCi+WF0/Z7meteM4jYdx5nrV Qx9pgur7VPbP88bJOqWCPBev2Ho= ) 172800 NSEC a.test.dnssec. NS SOA RRSIG NSEC DNSKEY 172800 RRSIG NSEC 5 2 172800 20101009062248 ( 20100909062248 40665 test.dnssec. E76ayamsAAz8Zcj7060KY0nTFzHPztM/Pkc5 OM0EcP7C5+ocn4L8M2J0rmR3jxfYvCpOk0BQ Zniqn9Aw41Qk068yJ2dfDPwV5zT0+te0nzwC /awJGPMXLzMj4JejYTlTiKfspGDJCG44F+lb lHXdcUhbjXf3loqMQadZFQ/eSn0= ) 86400 DNSKEY 256 3 5 ( AwEAAbQ8qrNN5vetx/7E1VOgXZ7fLqwG1y/i 55hWGCeLbcS95ratT9A6UospOvPSwPTlrFgF RWP67Pubzbsy7/damS1F1+p4GgBQway52Hd1 8HjdHKKC6kIxna9pOJBRfhCdzAsv9LnpRvrw mDpcFAqhdn5k5RqwcUF1eOZrKjxXjAOr ) ; key id = 40665 86400 DNSKEY 257 3 5 ( AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdM ZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVp xXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7m YF29/ZTXB6nmdSxruQlSvYhzkWTaPNtfrUnI UlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWX nPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPm p2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQX ISmAeV1evGomCC/x9DNleDHCszJOptwurzRP Z7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTz CkRnrlvXYJpgzDtgmQxE9Bs= ) ; key id = 59647 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. sa4W3tvl6n0TkIcq3xzhG17C2O0lRhllrpUd n5Hs6yVo8r7stewP6tm2XscQiAeseDgmv28w s6Mtiz8uPUbrgFRb6SJk7coH2n/2Y3//S9YP NldDFv3luPnnU1TBb3jDsBKIZWHU9yl/cLNA OKUhlMDd40txk+fQi3iiV5Ls9K8= ) 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 59647 test.dnssec. b5fz0dEp2co2pVO7biY896XmsJanjQIR69vC MvSF104/9iZk6eGVFi6hsa4aZcXutEjUDESB ynPkDjMWWIIhN6K1jYKGIc/sFKv1IUONRYHF KXGgZhC6aI0B1E4NA9AXLjlBVF60nHdc3iw8 5gTLDjypP3qAZrnzMvdiBopLnVdB25UZYKn8 mGpOuzKqX02TGMCFMlEVtMX4FP/XKAE8UjiQ 5ehC1JvIKIyg/2zM+ot3nmcqqtUfzp/Hweyc aIkl/9wPJPwMedfTqOjfUKFdB+GiZ0Zz16HZ 5MfJui5IGh5Y6Q04kMrnap2V5U7mByTzx/ud V/eFYhmSHGtAXzBjMA== ) a.test.dnssec. 
86400 IN A 1.1.1.1 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzH oU63fHJHQHeQV+fc0Rx8cCmZSzuqk1lSBelV 3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsg YEUQJMfk4FLjYW67DHNcuoCnKbDJhZS0ndVf I474k7ZEZJsGslwk/vcIoFnTa4o= ) 172800 NSEC b.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. TCduf7xPSrWvEAzBO7Kx5haR85yA/lbsswkQ v0QxlskqAqo+9YedGQV+wGblbCIOmkomrYcq u/rXQ5yoQ3SDXd/bw6EFdoQmH8UJOjMc7SdR xY93MjawPB6XXlJsSlbBFPWJwEpILVRhdBFX czdS5VCa1KmhAYZYQp1FY9rMelA= ) b.test.dnssec. 86400 IN A 2.2.2.2 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. f0M6Tcqe6B09ctaN3BGAit4u4cJE8x3Ik8sh gyMu0GN/lMv/Bo7PB6hgylLam3HXtF1pPAzX oYudXmhU8afPapHMXfUitC1lFQB5ZW052ZC7 JXV9MnGULydz1blj2EdN+JL3Za8SJKM0LrLB XdQ+QUV+A/6N7hUV6usz5YmdBeI= ) 172800 NSEC ns1.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. sc6v19dcOFVa295/Xf1pKxBhbdpEErY8CTDQ fw2fjJf0Y3wL1Y1Mlr5zi5ShceQwgua+6YHE DWNbAPcXrJ0lLMU4DU5r0sAyBiBCgCavngGk i59W+nv11zuIpPMnlaMHpJVfJrQ+c4z7H9MH 77B0fMRFTUnvAXoq6ag8Q5POITI= ) ns1.test.dnssec. 86400 IN A 3.3.3.3 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Ke z0tdFiNfxvGbm85XyCtSqJIo2S/ZLVJUv/mG nGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP 5FL8SbjlovVYYAG5woW4p3+os28mmCAJA8gP JTywbcREEhFB4cir2M/QVP+9h+Y= ) 172800 NSEC test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. i7F/ezGl/pGXCC6JyVDaxuwdZMAgv9QLxwzi PTgjCG8Sj6pTIxaQkSLwXsoB9gF77WWBANow R2SWdz0Zai2vWnv/NYoNm9ZfRJEQ9NuExeYp rvX/+lLOHvZXN6tUerIQbWAxO2GwdzHoejSn wReUNVr9MxzZUvuJ33Z7X/7s9VQ= ) Named.conf (Cache/Resolver) include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; recursion yes; notify no; #DNSSEC dnssec-enable yes; dnssec-validation yes; listen-on {127.0.0.1; 172.23.2.87; 80.58.102.37; 80.58.102.115; }; #listen-on {127.0.0.1; 80.58.102.37; 80.58.102.115; }; allow-query { telefonica; }; allow-transfer { none; }; recursive-clients 40000; max-cache-size 838860800; rrset-order { order fixed;}; max-ncache-ttl 600; }; trusted-keys { "test.dnssec." 257 3 5 "AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdMZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVpxXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7mYF29/ZT XB6nmdSxruQlSvYhzkWTaPNtfrUnIUlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWXnPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPmp2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQXIS mAeV1evGomCC/x9DNleDHCszJOptwurzRPZ7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTzCkRnrlvXYJpgzDtgmQxE9Bs="; }; I have configured a secure zone (test.dnssec) and I'm trying to perform some queries from the resolver to the Name server (172.23.2.57): /usr/local/bin/dig @172.23.2.57 a.test.dnssec +dnssec ; <<>> DiG 9.7.1-P2 <<>> @172.23.2.57 a.test.dnssec +dnssec ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2654 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 3 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;a.test.dnssec. IN A ;; ANSWER SECTION: a.test.dnssec. 86400 IN A 1.1.1.1 a.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. 
P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzHoU63fHJHQHeQV+ fc0Rx8 cCmZSzuqk1lSBelV3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsgYEUQ JMfk4FLjYW67DHNcuoCnKbDJhZS0ndVfI474k7ZEZJsGslwk/vcIoFnT a4o= ;; AUTHORITY SECTION: test.dnssec. 86400 IN NS ns1.test.dnssec. test.dnssec. 86400 IN RRSIG NS 5 2 86400 20101009062248 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeoidNuytxbiFnbCOunzvaY pgvDpEr0CPrwXaDLTSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFwaaQX Fc3rDLsXjCi+WF0/Z7meteM4jYdx5nrVQx9pgur7VPbP88bJOqWCPBev 2Ho= ;; ADDITIONAL SECTION: ns1.test.dnssec. 86400 IN A 3.3.3.3 ns1.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Kez0tdFiNfxvGbm85XyCtS qJIo2S/ZLVJUv/mGnGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP5FL8 SbjlovVYYAG5woW4p3+os28mmCAJA8gPJTywbcREEhFB4cir2M/QVP+9 h+Y= ;; Query time: 1 msec ;; SERVER: 172.23.2.57#53(172.23.2.57) ;; WHEN: Thu Sep 9 09:47:14 2010 ;; MSG SIZE rcvd: 605 I obtain the right answer along with the RRSIG records, but the problem is that I'm not seeing the ad flag activated. Any idea about what is wrong????
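    The AD (authenticated data) bit is set by a validating resolver, not by an authoritative server answering for its own zone, so it is worth repeating the query against the cache/resolver and checking the flags there. A minimal way to script that check, assuming the dnspython library is available (it is not part of this setup), with the resolver address and name taken from the question:

        import dns.flags
        import dns.message
        import dns.query
        import dns.rdatatype

        # Build an A query with EDNS0 and the DO bit so DNSSEC records are returned.
        query = dns.message.make_query("a.test.dnssec.", dns.rdatatype.A, want_dnssec=True)

        # 172.23.2.87 is the cache/resolver from the question; swap in the server to test.
        response = dns.query.udp(query, "172.23.2.87", timeout=5)

        if response.flags & dns.flags.AD:
            print("AD bit set: the resolver validated the answer")
        else:
            print("AD bit not set: no validation happened on this path")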

    Read the article

  • Weird DNS bug - external server resolves to internal IP

    - by emilecantin
    I have a server that is hosted by my university. I have root access, but no control over network setup, firewall, etc. This server's DNS resolves to an internal IP here on campus (10.x.x.x), and an external IP outside campus. I also have a few servers hosted at Amazon, and they mostly work well. However, one of them started to resolve the university server by its internal IP address. This causes problems, as 10.x.x.x on Amazon EC2 is someone else. I have connected to the Amazon server with SSH agent forwarding a few times in the past, to access a Git repository on the university server. Any idea what could cause this? EDIT: Here's my /etc/resolv.conf # Generated by dhcpcd for interface eth0 search ec2.internal nameserver 172.16.0.23 Here's the output of dig myserver.myuniversity.ca.: ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca. ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34470 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;myserver.myuniversity.ca. IN A ;; ANSWER SECTION: myserver.myuniversity.ca. 537586 IN A 10.43.x.x ;; Query time: 2 msec ;; SERVER: 172.16.0.23#53(172.16.0.23) ;; WHEN: Wed Nov 28 16:07:21 2012 ;; MSG SIZE rcvd: 60 Here's the expected output (on another Amazon server): ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca. ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8045 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;myserver.myuniversity.ca. IN A ;; ANSWER SECTION: myserver.myuniversity.ca. 601733 IN A x.x.239.1 ;; Query time: 1 msec ;; SERVER: 172.16.0.23#53(172.16.0.23) ;; WHEN: Wed Nov 28 16:09:36 2012 ;; MSG SIZE rcvd: 60
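    One way to see whether the EC2-provided resolver is handing back a split-horizon answer is to ask it and an outside resolver for the same name and compare the results. A minimal sketch, assuming the dnspython 2.x library; the hostname is a placeholder, since the real one is redacted in the question:

        import dns.resolver

        NAME = "myserver.myuniversity.ca."  # placeholder name, as in the question

        def lookup(nameserver):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [nameserver]
            return [rr.address for rr in resolver.resolve(NAME, "A")]

        # 172.16.0.23 is the Amazon-provided resolver from /etc/resolv.conf;
        # 8.8.8.8 is used here only as an outside reference point.
        print("EC2 resolver   :", lookup("172.16.0.23"))
        print("Public resolver:", lookup("8.8.8.8"))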

    Read the article

  • Nodes inside Cisco VPN. Incoming SSH requests allowed. But can't initiate an outbound SSH.

    - by Douglas Peter
    I have a gateway-to-gateway VPN set up between my Linksys RV042 router and a Cisco VPN. I can SSH into any of the machines inside the VPN from my network, but none of the machines inside the VPN can initiate an SSH connection into my network; it seems they have blocked even ping requests to my network gateway. This is the requirement: I have scripts that SSH into the machines inside the VPN and run a long MySQL query. The query writes its output to a file, and the time these queries take is variable, so I have a loop on my machine that periodically SSHes into the VPN machine, checks whether the query has finished, and pulls the generated file using SCP. I need to simplify it thus: the script will run on the machine inside the VPN, and when the query completes, it will SSH into my machine and push the generated file. Thanks for any ideas.
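    For reference, the polling approach described above looks roughly like the sketch below (Python, driving plain ssh/scp through subprocess; the host name, result path, and done-marker file are hypothetical, not taken from the question):

        import subprocess
        import time

        VPN_HOST = "user@vpn-machine"            # hypothetical machine inside the VPN
        REMOTE_RESULT = "/tmp/query_output.csv"  # hypothetical file written by the long query
        DONE_MARKER = "/tmp/query_output.done"   # hypothetical flag file touched when it finishes

        def remote_file_exists(path):
            # ssh exits 0 if the remote test succeeds, non-zero otherwise.
            return subprocess.run(["ssh", VPN_HOST, "test", "-f", path]).returncode == 0

        while not remote_file_exists(DONE_MARKER):
            time.sleep(60)  # poll once a minute until the query reports it is finished

        # Pull the finished result back across the VPN.
        subprocess.run(["scp", f"{VPN_HOST}:{REMOTE_RESULT}", "."], check=True)

    Reversing the direction, as the question asks, only works if the VPN machines are allowed to open connections outward, which is exactly what appears to be blocked here.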

    Read the article

  • Splunk is fantastically expensive: What are the alternatives? [closed]

    - by samsmith
    Possible Duplicate: Alternatives to Splunk? This has been discussed, but it has been several months, so it may be time to revisit it: Earlier discussion RE Splunk alternatives. For the record, Splunk rocks, but the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter: the Splunk cost is simply too high (by a multiple). So we are looking around. Is anyone out there building a Splunk-like system? Our basic needs:
    - able to listen for syslog messages on multiple UDP ports
    - able to index the incoming data asynchronously
    - some kind of search engine
    - some kind of UI
    - an API to the search engine (to embed in our console)
    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts! UPDATE: We spent two weeks researching commercial and open source options. Our conclusion: write our own (we are a software company... we know how to write things). We built a system on MongoDB and .NET that gives us the functions we needed, in about one engineering week, and we have now completed our implementation. We use two MongoDB servers (master and slave) and are able to log and index any amount of log data (5 GB/day, 15 GB/day, etc.), limited only by disk space. OBSERVATIONS: this space needs a solid solution at a $1000-3000 flat rate. The licensing models used by the commercial firms are based on a "milk the data center ops guys" model. That is their right (of course!), but it leaves a HUGE space open for someone to come in underneath them. My guess is that in another year or two there will be a good open source solution that is really usable. Thank you all for your input (even if it was self promotion).
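    The home-grown pipeline described in the update can be sketched in a few lines. This is only an illustration, assuming Python with the pymongo driver and a local MongoDB; the listen port, database, and collection names are made up, and a real version would need batching, retention, and indexes for search:

        import socket
        from datetime import datetime, timezone

        from pymongo import MongoClient

        UDP_PORT = 5514  # hypothetical syslog listen port
        collection = MongoClient("mongodb://localhost:27017")["logs"]["syslog"]

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", UDP_PORT))

        while True:
            data, (source_ip, _) = sock.recvfrom(65535)
            # Store each syslog datagram with its source and arrival time for later searching.
            collection.insert_one({
                "received_at": datetime.now(timezone.utc),
                "source": source_ip,
                "message": data.decode("utf-8", errors="replace"),
            })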

    Read the article

  • How to log error queries in mysql?

    - by Kaizoku
    I know that the general_log logs all queries, but I want to find out which queries fail and get the error message. I have tried running a bad query on purpose, but it is logged as a normal query and no error is reported alongside it. Any ideas?
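    The general query log records statements as they arrive, whether or not they succeed, so the error has to be captured elsewhere. One workaround is to log failures from the application side; a minimal sketch, assuming Python with the PyMySQL driver (connection details are placeholders):

        import logging

        import pymysql

        logging.basicConfig(filename="failed_queries.log", level=logging.ERROR)

        connection = pymysql.connect(host="localhost", user="app", password="secret", database="mydb")

        def run_query(sql, params=None):
            try:
                with connection.cursor() as cursor:
                    cursor.execute(sql, params)
                    return cursor.fetchall()
            except pymysql.MySQLError as exc:
                # Record the failing statement together with MySQL's error code and message.
                logging.error("query failed: %s -- %s", sql, exc)
                raise

        run_query("SELECT * FROM no_such_table")  # a deliberately bad query ends up in failed_queries.log

    If switching servers is an option, MariaDB ships a SQL_ERROR_LOG plugin that records failed statements on the server side.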

    Read the article

  • How to run a service as a user who can't delete or update or create a file

    - by neeraj
    MongoDB has a web-based console for trying out MongoDB. I have created something similar for trying out Node.js: I accept user input and then run eval on that command. Given the power of Node.js, someone using the web console could create a file, delete files on the system, or execute 'rm -rf'. I was thinking: would it be okay to run node as a user called node? This user would not have any privilege to write, create, or update anything; the only access it would have is read access. Will that work, or is that too much of a risk? What is a good strategy for handling such a situation?
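    The general pattern being asked about, dropping to an unprivileged account before touching untrusted input, looks roughly like the sketch below. It is written in Python purely to illustrate the idea rather than the Node.js service itself; the account name is hypothetical, and read-only access alone does not prevent CPU or memory abuse, so a chroot or container on top is worth considering.

        import os
        import pwd

        def drop_privileges(username="sandbox"):  # hypothetical unprivileged, read-only account
            account = pwd.getpwnam(username)
            os.setgroups([])           # shed supplementary groups inherited from root
            os.setgid(account.pw_gid)  # change group first, while still permitted to
            os.setuid(account.pw_uid)  # after this call the process cannot regain root

        if os.getuid() == 0:
            drop_privileges()

        # Anything evaluated past this point runs with the sandbox user's permissions only.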

    Read the article

  • new project; entire node.js app

    - by Jared
    I have been looking into Node.js, Express, and NowJS, and I love how easy it is to have real-time interactions between clients. My background is mostly CodeIgniter MVC with PHP and MySQL. I want to remake a current web project of mine from scratch to make everything better and more real-time with this newer technology. After researching and building test examples, I want to use Node.js, Express, and NowJS for the real-time interactions once someone connects over Socket.IO to pull data back to clients, but use CodeIgniter for control of the site, user management, possibly a shopping cart/store, and pretty much everything else. This is purely due to time constraints and the fact that I am already familiar with doing it that way. I have also been looking at MongoDB as an alternative to MySQL. Basically, the app is going to be multiple chat rooms all on one page, with notifications and private messaging, so lots of data transfer and images. Before I start piecing it together, I wanted to hear from people who have already done something similar. My model would use CodeIgniter and MySQL to render the page, then connect clients to a Node.js server and broadcast using Express and NowJS. Would MongoDB be better than MySQL for storing tons of messages and data? Also, does it make sense not to build the whole site on Node.js, and to piece it together like that instead? I was asked to repost this somewhere else as it was not up to the format for SO; original post here: http://stackoverflow.com/questions/12649469/new-project-need-some-start-up-advice-node-js-app#comment17062924_12649469

    Read the article

  • n00b needs some PHP syntax guidance [closed]

    - by Michael
    If you look at http://www.cruc.es/?paged=12/ and go to the bottom of the page, you'll see the bottom navigation with the next and previous options. I've been able to make the page numbers work by changing page to paged= in the code, but I don't know enough about PHP to get the previous/next options to work. Any advice would be appreciated; I've pasted the code below. Thank you: n00b
        if ( $query->found_posts > $query->query_vars["posts_per_page"] ) {
            echo '<ul class="paging">';
            // Previous link?
            if ( $page > 1 ) {
                echo '<li class="previous"><a href="'.$baseURL.'/page/'.($page-1).'/'.$qs.'">previous</a></li>';
            }
            // Loop through pages
            for ( $i=1; $i <= $query->max_num_pages; $i++ ) {
                // Current page or linked page?
                if ( $i == $page ) {
                    echo '<li class="active">'.$i.'</li>';
                } else {
                    echo '<li><a href="'.$baseURL.'/?paged='.$i.'/'.$qs.'">'.$i.'</a></li>';
                }
            }
            // Next link?
            if ( $page < $query->max_num_pages ) {
                echo '<li><a href="'.$baseURL.'/page/'.($page+1).'/'.$qs.'">next</a></li>';
            }
            echo '</ul>';
        }

    Read the article

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so many benefits, why not do it for all LINQ to SQL queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling hurt performance, and by roughly what percentage? It would be good to have a default at the application config level, or at the DBML level, to specify whether all select queries should be compiled. And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I found an answer from the author, ricom (6 Jul 2007 3:08 AM): "Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs." Also, from 10 Tips to Improve your LINQ to SQL Application Performance: if you are using CompiledQuery, make sure that you use it more than once, as it is more costly than normal querying the first time. The resulting function from a CompiledQuery is an object holding the SQL statement and the delegate to apply it, and your delegate has the ability to replace the variables (or parameters) in the resulting query. However, I feel that many developers are not informed enough about the benefits of Compile. I think that tools like FxCop and ReSharper should check queries and suggest when compiling is recommended. Related Articles for LINQ to SQL: MSDN How to: Store and Reuse Queries (LINQ to SQL); 10 Tips to Improve your LINQ to SQL Application Performance. Related Articles for Entity Framework: MSDN: CompiledQuery Class; Exploring the Performance of the ADO.NET Entity Framework - Part 1; Exploring the Performance of the ADO.NET Entity Framework - Part 2; ADO.NET Entity Framework 4.0: Making it fast through Compiled Query.

    Read the article

  • New features in SQL Prompt 6.4

    - by Tom Crossman
    We’re pleased to announce a new beta version of SQL Prompt. We’ve been trying out a few new core technologies, and used them to add features and bug fixes suggested by users on the SQL Prompt forum and suggestions forum. You can download the SQL Prompt 6.4 beta here (zip file). Let us know what you think! New features Execute current statement In a query window, you can now execute the SQL statement under your cursor by pressing Shift + F5. For example, if you have a query containing two statements and your cursor is placed on the second statement: When you press Shift + F5, only the second statement is executed:   Insert semicolons You can now use SQL Prompt to automatically insert missing semicolons after each statement in a query. To insert semicolons, go to the SQL Prompt menu and click Insert Semicolons. Alternatively, hold Ctrl and press B then C. BEGIN…END block highlighting When you place your cursor over a BEGIN or END keyword, SQL Prompt now automatically highlights the matching keyword: Rename variables and aliases You can now use SQL Prompt to rename all occurrences of a variable or alias in a query. To rename a variable or alias, place your cursor over an instance of the variable or alias you want to rename and press F2: Improved loading dialog box The database loading dialog box now shows actual progress, and you can cancel loading databases:   Single suggestion improvement SQL Prompt no longer suggests keywords if the keyword has been typed and no other suggestions exist. Performance improvement SQL Prompt now has less impact on Management Studio start up time. What do you think? We want to hear your feedback about the beta. If you have any suggestions, or bugs to report, tell us on the SQL Prompt forum or our suggestions forum.

    Read the article

  • How can I open a console application with a given window size?

    - by Oliver Salzburg
    The application I want to start is MongoDB. If I start it normally, it looks like this: I don't like the amount of line breaks, and I have a lot of screen space, so I would like to utilize that space to get rid of the line breaks. I can change the size of the console window with MODE, so I wrote a batch file like this:
        @ECHO OFF
        MODE con:cols=140 lines=70
        %~dp0mongodb\bin\mongod --dbpath %~dp0data --rest
    So far, so good. When I start this batch file, I get a larger window, as desired. But when I now press Ctrl+C to exit MongoDB, I get the annoying prompt: Terminate batch job (Y/N)? This is useless, because the command I just exited was the last command in the batch job anyway, and no matter what I answer the result is the same. So, how can I get a larger console window for the application without getting that prompt when I hit Ctrl+C?

    Read the article

  • CNAME to another domain fails on some office networks, why?

    - by crashalpha
    Our domain "aspenfasteners.com" is hosted by Volusion. We have CNAME records "find" and "search" which point to site indexing accounts on www.picosearch.com. These addresses fail on SOME private office networks which have their own DNS. We suspect the problem comes from Volusion's own name servers, n2.volusion.com and n3.volusion.com. Volusion support on problems this technical is non-existant. We have tried an NSLOOKUP on find.aspenfasteners.com with level 2 debugging info, and we got the results below. Is it possible that the local DNS is recursing to Volusion's name servers, and that while Volusion DOES return the canonical name, they do NOT resolve the address? Can anybody with expertise in this sort of stuff PLEASE look at the NSLOOKUP below and tell me if we are right, because Volusion is giving me absolutely NO support on this topic. I need proof of where the problem lies. Thanks VERY much! Carlo find.aspenfasteners.com Server: mtl-srm-dbsv-01.fastenerwholesale.com Address: 192.168.0.44 SendRequest(), len 61 HEADER: opcode = QUERY, id = 8, rcode = NOERROR header flags: query, want recursion questions = 1, answers = 0, authority records = 0, additional = 0 QUESTIONS: find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN ------------ Got answer (138 bytes): HEADER: opcode = QUERY, id = 8, rcode = NXDOMAIN header flags: response, auth. answer, want recursion, recursion avail. questions = 1, answers = 0, authority records = 1, additional = 0 QUESTIONS: find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN AUTHORITY RECORDS: -> fastenerwholesale.com type = SOA, class = IN, dlen = 46 ttl = 3600 (1 hour) primary name server = mtl-srm-dbsv-01.fastenerwholesale.com responsible mail addr = admin.fastenerwholesale.com serial = 10219 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) ------------ SendRequest(), len 41 HEADER: opcode = QUERY, id = 9, rcode = NOERROR header flags: query, want recursion questions = 1, answers = 0, authority records = 0, additional = 0 QUESTIONS: find.aspenfasteners.com, type = A, class = IN ------------ Got answer (141 bytes): HEADER: opcode = QUERY, id = 9, rcode = NXDOMAIN header flags: response, auth. answer questions = 1, answers = 1, authority records = 1, additional = 1 QUESTIONS: find.aspenfasteners.com, type = A, class = IN ANSWERS: -> find.aspenfasteners.com type = CNAME, class = IN, dlen = 17 canonical name = www.picosearch.com ttl = 3600 (1 hour) AUTHORITY RECORDS: -> com type = SOA, class = IN, dlen = 43 ttl = 900 (15 mins) primary name server = ns3.volusion.com responsible mail addr = admin.volusion.com serial = 1 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) ADDITIONAL RECORDS: -> ns3.volusion.com type = A, class = IN, dlen = 4 internet address = 65.61.137.154 ttl = 900 (15 mins) * mtl-srm-dbsv-01.fastenerwholesale.com can't find find.aspenfasteners.com: Non-existent domain
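    One way to test the theory that the canonical name comes back but the target never gets resolved is to do the two steps by hand: ask Volusion's name server for the CNAME, then resolve the target through a resolver that is known to work. A rough sketch, assuming the dnspython 2.x library; 8.8.8.8 stands in for any known-good recursive resolver:

        import dns.resolver

        def make_resolver(nameserver):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [nameserver]
            return resolver

        # Step 1: Volusion's server should hand back the canonical name.
        volusion = make_resolver("65.61.137.154")  # ns3.volusion.com, from the nslookup output
        cname = volusion.resolve("find.aspenfasteners.com.", "CNAME")[0].target.to_text()
        print("canonical name:", cname)  # expected: www.picosearch.com.

        # Step 2: a recursive resolver has to chase that out-of-zone target itself.
        public = make_resolver("8.8.8.8")
        for rr in public.resolve(cname, "A"):
            print("resolved address:", rr.address)

    If step 1 works everywhere but step 2 is what the office resolvers refuse to do, the problem sits with those resolvers rather than with the CNAME record itself.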

    Read the article

< Previous Page | 260 261 262 263 264 265 266 267 268 269 270 271  | Next Page >