Search Results

Search found 43168 results on 1727 pages for 'sql log'.


  • What the heck is an OPTIONS method in an IIS 7.5 web log?

    - by Knox
    I know what a GET and a POST are, but it's almost impossible to Google for the word OPTIONS. Here's what I see (I deleted all the stuff at the end of each):

    11/23/11 0:02:13 10.100.0.14 GET /CUpdate2.cshtml _=1322006533495
    11/23/11 0:02:13 10.200.0.10 OPTIONS /AssignmentCount _=1322006576798
    11/23/11 0:02:13 10.200.0.10 GET /media/faxSound.wav -
    11/23/11 0:02:13 10.200.0.10 GET /Star/StarUpdates _=1322006578729
    11/23/11 0:02:13 10.100.0.10 GET /CUpdate2.cshtml _=1322006533268
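    For context, OPTIONS is a standard HTTP verb: a client uses it to ask the server which methods (and, in cross-origin AJAX "preflight" checks, which headers and origins) are allowed for a resource before sending the real request, which is a common reason for it to appear next to XMLHttpRequest-style calls like these. A generic illustration of such an exchange (hypothetical hosts, not taken from the asker's logs):

        OPTIONS /AssignmentCount HTTP/1.1
        Host: app.internal
        Origin: http://portal.internal
        Access-Control-Request-Method: GET

        HTTP/1.1 200 OK
        Allow: OPTIONS, GET, POST
        Access-Control-Allow-Origin: http://portal.internal
        Access-Control-Allow-Methods: GET, POST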

    Read the article

  • puma init.d for CentOS 6 fails with runuser: user /var/log/puma.log does not exist

    - by Rubytastic
    Trying to get an init.d/puma script to work on CentOS 6. It throws the error runuser: user /var/log/puma.log does not exist. I run this from the /srv/books/current folder but it fails. I tried to debug the values but can't quite work out what is missing and why it throws this error.

    #! /bin/sh
    # puma - this script starts and stops the puma daemon
    #
    # chkconfig: - 85 15
    # description: Puma
    # processname: puma
    # config: /etc/puma.conf
    # pidfile: /srv/books/current/tmp/pids/puma.pid

    # Author: Darío Javier Cravero <[email protected]>
    #
    # Do NOT "set -e"

    # Original script https://github.com/puma/puma/blob/master/tools/jungle/puma
    # It was modified here by Stanislaw Pankevich <[email protected]>
    # to run on CentOS 5.5 boxes.
    # Script works perfectly on CentOS 5: script uses its native daemon().
    # Puma is being stopped/restarted by sending signals, control app is not used.

    # Source function library.
    . /etc/rc.d/init.d/functions

    # Source networking configuration.
    . /etc/sysconfig/network

    # Check that networking is up.
    [ "$NETWORKING" = "no" ] && exit 0

    # PATH should only include /usr/* if it runs after the mountnfs.sh script
    PATH=/usr/local/bin:/usr/local/sbin/:/sbin:/usr/sbin:/bin:/usr/bin
    DESC="Puma rack web server"
    NAME=puma
    DAEMON=$NAME
    SCRIPTNAME=/etc/init.d/$NAME
    CONFIG=/etc/puma.conf
    JUNGLE=`cat $CONFIG`
    RUNPUMA=/usr/local/bin/run-puma

    # Skipping the following non-CentOS string
    # Load the VERBOSE setting and other rcS variables
    # . /lib/init/vars.sh

    # CentOS does not have these functions natively
    log_daemon_msg() { echo "$@"; }
    log_end_msg() { [ $1 -eq 0 ] && RES=OK; logger ${RES:=FAIL}; }

    # Define LSB log_* functions.
    # Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
    . /lib/lsb/init-functions

    #
    # Function that performs a clean up of puma.* files
    #
    cleanup() {
      echo "Cleaning up puma temporary files..."
      echo $1;
      PIDFILE=$1/tmp/puma/puma.pid
      STATEFILE=$1/tmp/puma/puma.state
      SOCKFILE=$1/tmp/puma/puma.sock
      rm -f $PIDFILE $STATEFILE $SOCKFILE
    }

    #
    # Function that starts the jungle
    #
    do_start() {
      log_daemon_msg "=> Running the jungle..."
      for i in $JUNGLE; do
        dir=`echo $i | cut -d , -f 1`
        user=`echo $i | cut -d , -f 2`
        config_file=`echo $i | cut -d , -f 3`
        if [ "$config_file" = "" ]; then
          config_file="$dir/puma/config.rb"
        fi
        log_file=`echo $i | cut -d , -f 4`
        if [ "$log_file" = "" ]; then
          log_file="$dir/puma/puma.log"
        fi
        do_start_one $dir $user $config_file $log_file
      done
    }

    do_start_one() {
      PIDFILE=$1/puma/puma.pid
      if [ -e $PIDFILE ]; then
        PID=`cat $PIDFILE`
        # If the puma isn't running, run it, otherwise restart it.
        if [ "`ps -A -o pid= | grep -c $PID`" -eq 0 ]; then
          do_start_one_do $1 $2 $3 $4
        else
          do_restart_one $1
        fi
      else
        do_start_one_do $1 $2 $3 $4
      fi
    }

    do_start_one_do() {
      log_daemon_msg "--> Woke up puma $1"
      log_daemon_msg "user $2"
      log_daemon_msg "log to $4"
      cleanup $1;
      daemon --user $2 $RUNPUMA $1 $3 $4
    }

    #
    # Function that stops the jungle
    #
    do_stop() {
      log_daemon_msg "=> Putting all the beasts to bed..."
      for i in $JUNGLE; do
        dir=`echo $i | cut -d , -f 1`
        do_stop_one $dir
      done
    }

    #
    # Function that stops the daemon/service
    #
    do_stop_one() {
      log_daemon_msg "--> Stopping $1"
      PIDFILE=$1/tmp/puma/puma.pid
      STATEFILE=$1/tmp/puma/puma.state
      echo $PIDFILE
      if [ -e $PIDFILE ]; then
        PID=`cat $PIDFILE`
        echo "Pid:"
        echo $PID
        if [ "`ps -A -o pid= | grep -c $PID`" -eq 0 ]; then
          log_daemon_msg "---> Puma $1 isn't running."
        else
          log_daemon_msg "---> About to kill PID `cat $PIDFILE`"
          # pumactl --state $STATEFILE stop
          # Many daemons don't delete their pidfiles when they exit.
          kill -9 $PID
        fi
        cleanup $1
      else
        log_daemon_msg "---> No puma here..."
      fi
      return 0
    }

    #
    # Function that restarts the jungle
    #
    do_restart() {
      for i in $JUNGLE; do
        dir=`echo $i | cut -d , -f 1`
        do_restart_one $dir
      done
    }

    #
    # Function that sends a SIGUSR2 to the daemon/service
    #
    do_restart_one() {
      PIDFILE=$1/tmp/puma/puma.pid
      i=`grep $1 $CONFIG`
      dir=`echo $i | cut -d , -f 1`
      if [ -e $PIDFILE ]; then
        log_daemon_msg "--> About to restart puma $1"
        # pumactl --state $dir/tmp/puma/state restart
        kill -s USR2 `cat $PIDFILE`
        # TODO Check if process exists
      else
        log_daemon_msg "--> Your puma was never playing... Let's get it out there first"
        user=`echo $i | cut -d , -f 2`
        config_file=`echo $i | cut -d , -f 3`
        if [ "$config_file" = "" ]; then
          config_file="$dir/config/puma.rb"
        fi
        log_file=`echo $i | cut -d , -f 4`
        if [ "$log_file" = "" ]; then
          log_file="$dir/log/puma.log"
        fi
        do_start_one $dir $user $config_file $log_file
      fi
      return 0
    }

    #
    # Function that statuses the jungle
    #
    do_status() {
      for i in $JUNGLE; do
        dir=`echo $i | cut -d , -f 1`
        do_status_one $dir
      done
    }

    do_status_one() {
      PIDFILE=$1/tmp/puma/pid
      i=`grep $1 $CONFIG`
      dir=`echo $i | cut -d , -f 1`
      if [ -e $PIDFILE ]; then
        log_daemon_msg "--> About to status puma $1"
        pumactl --state $dir/tmp/puma/state stats
        # kill -s USR2 `cat $PIDFILE`
        # TODO Check if process exists
      else
        log_daemon_msg "--> $1 isn't there :(..."
      fi
      return 0
    }

    do_add() {
      str=""
      # App's directory
      if [ -d "$1" ]; then
        if [ "`grep -c "^$1" $CONFIG`" -eq 0 ]; then
          str=$1
        else
          echo "The app is already being managed. Remove it if you want to update its config."
          exit 1
        fi
      else
        echo "The directory $1 doesn't exist."
        exit 1
      fi
      # User to run it as
      if [ "`grep -c "^$2:" /etc/passwd`" -eq 0 ]; then
        echo "The user $2 doesn't exist."
        exit 1
      else
        str="$str,$2"
      fi
      # Config file
      if [ "$3" != "" ]; then
        if [ -e $3 ]; then
          str="$str,$3"
        else
          echo "The config file $3 doesn't exist."
          exit 1
        fi
      fi
      # Log file
      if [ "$4" != "" ]; then
        str="$str,$4"
      fi
      # Add it to the jungle
      echo $str >> $CONFIG
      log_daemon_msg "Added a Puma to the jungle: $str. You still have to start it though."
    }

    do_remove() {
      if [ "`grep -c "^$1" $CONFIG`" -eq 0 ]; then
        echo "There's no app $1 to remove."
      else
        # Stop it first.
        do_stop_one $1
        # Remove it from the config.
        sed -i "\\:^$1:d" $CONFIG
        log_daemon_msg "Removed a Puma from the jungle: $1."
      fi
    }

    case "$1" in
      start)
        [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
        if [ "$#" -eq 1 ]; then
          do_start
        else
          i=`grep $2 $CONFIG`
          dir=`echo $i | cut -d , -f 1`
          user=`echo $i | cut -d , -f 2`
          config_file=`echo $i | cut -d , -f 3`
          if [ "$config_file" = "" ]; then
            config_file="$dir/config/puma.rb"
          fi
          log_file=`echo $i | cut -d , -f 4`
          if [ "$log_file" = "" ]; then
            log_file="$dir/log/puma.log"
          fi
          do_start_one $dir $user $config_file $log_file
        fi
        case "$?" in
          0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
          2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
      stop)
        [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
        if [ "$#" -eq 1 ]; then
          do_stop
        else
          i=`grep $2 $CONFIG`
          dir=`echo $i | cut -d , -f 1`
          do_stop_one $dir
        fi
        case "$?" in
          0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
          2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
      status)
        # TODO Implement.
        log_daemon_msg "Status $DESC" "$NAME"
        if [ "$#" -eq 1 ]; then
          do_status
        else
          i=`grep $2 $CONFIG`
          dir=`echo $i | cut -d , -f 1`
          do_status_one $dir
        fi
        case "$?" in
          0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
          2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
      restart)
        log_daemon_msg "Restarting $DESC" "$NAME"
        if [ "$#" -eq 1 ]; then
          do_restart
        else
          i=`grep $2 $CONFIG`
          dir=`echo $i | cut -d , -f 1`
          do_restart_one $dir
        fi
        case "$?" in
          0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
          2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
      add)
        if [ "$#" -lt 3 ]; then
          echo "Please specify the app's directory and the user that will run it, at least."
          echo "  Usage: $SCRIPTNAME add /path/to/app user /path/to/app/config/puma.rb /path/to/app/config/log/puma.log"
          echo "  config and log are optional."
          exit 1
        else
          do_add $2 $3 $4 $5
        fi
        case "$?" in
          0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
          2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
      remove)
        if [ "$#" -lt 2 ]; then
          echo "Please specify the app's directory to remove."
          exit 1
        else
          do_remove $2
        fi
        case "$?" in
          0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
          2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
      *)
        echo "Usage:" >&2
        echo "  Run the jungle:   $SCRIPTNAME {start|stop|status|restart}" >&2
        echo "  Add a Puma:       $SCRIPTNAME add /path/to/app user /path/to/app/config/puma.rb /path/to/app/config/log/puma.log"
        echo "  config and log are optional."
        echo "  Remove a Puma:    $SCRIPTNAME remove /path/to/app"
        echo "  On a Puma:        $SCRIPTNAME {start|stop|status|restart} PUMA-NAME" >&2
        exit 3
        ;;
    esac
    :
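    Given how the script parses /etc/puma.conf - each line is cut into comma-separated fields (app dir, user, config file, log file), and field 2 is handed to daemon --user - the error reads as if a log path is landing in the user field. A hedged example of the line format the script expects ("deploy" is a hypothetical username):

        /srv/books/current,deploy,/srv/books/current/config/puma.rb,/var/log/puma.log

    If the user field is missing or the fields are out of order, the next field over (here the log path) gets passed to runuser as the account name, which would produce exactly this error.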

    Read the article

  • How can I get a data usage/access log for an external hard-drive?

    - by Vittorio Vittori
    Hello, I'm working in an office with many people, and sometimes I leave behind my external hard drive with my personal data on it. I would like to know if there is some way to see whether my hard disk was used during my absence. I'm not the computer administrator, so I can't use exclusive file permissions, and I would really like to know if the hard disk was opened from another computer. I am using a Mac. Is there some other way to protect personal data on a USB device like a hard drive? If yes, can you post some links to possible guides? I hope there is some trick!

    Read the article

  • VMware vCenter tomcat log files missing

    - by sttaq
    I have upgraded from vCenter 5.0 to a trial version of vCenter 5.1. Previous versions used to have Tomcat log files inside the tomcat\logs folder, but since the update I have noticed that Tomcat is not writing any new log files. The log files were of the following format:

    vctomcat-stderr.XXXX-XX-XX.log
    vctomcat-stdout.XXXX-XX-XX.log

    I would like to know if there is a way to configure the Tomcat that ships with vCenter to produce these log files. Also, VMware seems to have removed the Configure Tomcat application that they used to ship with the older version. Any reasons for this?

    Read the article

  • How can I determine what is causing SQL Server 2008 Express to hang Windows 7?

    - by thelaughingdm
    I have SQL Server Express 2008 installed on a Windows 7 (32-bit) developer workstation. Whenever I run an application that accesses SQL Server, the Windows 7 shell hangs when the application closes. Applications like Windows Explorer and Task Manager become completely unresponsive, and the task bar will not allow any interaction. The only way to recover the system is to power cycle. Two of the applications in use when this happens are NUnit and SQL Server Management Studio. NUnit always runs the unit tests that interact with the database fine. SQL Server Management Studio will sometimes cause the problem while exploring the database. The Windows event log does not show any events that are obviously connected to the problem. I have reverted and reinstalled SQL Server Express 2008 several times. What can be done to identify what is causing SQL Server Express 2008 to hang the Windows 7 shell?
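    A hedged first diagnostic step (a sketch; it assumes the instance is still reachable while the hang builds up): snapshot what SQL Server is actually executing and waiting on before the desktop locks up, to rule a SQL-side wait in or out:

        -- Show active user requests with their wait state and statement text
        SELECT r.session_id, r.status, r.wait_type, r.wait_time, r.cpu_time, t.text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id > 50; -- ignore most system sessions

    If nothing is waiting inside SQL Server when the shell hangs, that points the investigation at drivers or the OS rather than the engine.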

    Read the article

  • How to add a linked Oracle server to SQL 2008 Express?

    - by David.Chu.ca
    I have tried downloading both the 32- and 64-bit Oracle Client 11g packages to Windows 2008 R2 with SQL Server 2008 Express. However, I still cannot see the Oracle provider in SQL Server, even when logged in as sa. Not sure if it is even possible with SQL Server 2008 Express? Any advice on how to do it? I followed the installation steps from this article: Making Linked Server Connection Between SQL Server 64 Bit & Oracle 32 Bit | MS SQL World. After the installation and a Windows reboot, I still cannot see the Oracle provider in the linked server providers list in SQL Server.
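    For reference, once a working Oracle OLE DB provider is registered, the linked server itself can be created in T-SQL rather than through the UI. A sketch under stated assumptions: ORACLE_LINK is a placeholder name, OraTns must be a valid alias from tnsnames.ora, and the credentials are demo placeholders:

        EXEC master.dbo.sp_addlinkedserver
             @server = N'ORACLE_LINK',
             @srvproduct = N'Oracle',
             @provider = N'OraOLEDB.Oracle',
             @datasrc = N'OraTns'; -- TNS alias from tnsnames.ora

        -- Map a remote Oracle account to the linked server
        EXEC master.dbo.sp_addlinkedsrvlogin
             @rmtsrvname = N'ORACLE_LINK',
             @useself = N'False',
             @rmtuser = N'scott',
             @rmtpassword = N'tiger';

    If OraOLEDB.Oracle does not appear under Linked Servers > Providers at all, the client install (or its bitness) is the problem, and no amount of T-SQL will register it.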

    Read the article

  • Disable log rotation for apache or move file location

    - by vittocia
    I need to change the log retention for Apache; currently it seems to be running on the default from logrotate.conf, which is weekly. It creates 'access_log.1', 'access_log.2' and so on for each week. The problem is it deletes the last log file every week, 'access_log.5'. I need the logs to keep going indefinitely instead of the last log being deleted every week. I don't want to change the default values held in logrotate.conf, so I assume there is a way to change the retention using the /etc/logrotate.d/httpd file? Its contents are as follows:

    /var/log/httpd/*log {
        missingok
        notifempty
        sharedscripts
        postrotate
            /sbin/service httpd reload > /dev/null 2>/dev/null || true
        endscript
    }

    What can I add/change to stop the last log being deleted every week?
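    One hedged tweak (assuming stock logrotate, where logrotate.conf's global default is rotate 4): add a rotate directive inside this stanza, since per-file directives in /etc/logrotate.d/httpd override the global defaults for these logs. rotate 9999 below is an arbitrary large stand-in for "keep effectively forever":

        /var/log/httpd/*log {
            rotate 9999
            missingok
            notifempty
            sharedscripts
            postrotate
                /sbin/service httpd reload > /dev/null 2>/dev/null || true
            endscript
        }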

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition. And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive! It almost makes you want to go with Oracle. That, and a desire to have Larry Ellison do things to your orifices.

    And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions. Alas…they don't…officially anyway. Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions!

    It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement:

    IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%'
        print 'Lame'
    else
        print 'Cool'

    If that statement returns true, it fails. (The print statements are just placeholders.) Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything).

    Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror. Surprisingly, it's not actually implemented in SQL Server! Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives.

    The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or under different names. You can create them using the MKLINK command in a command prompt:

    mklink /D D:\SkyDrive\Data D:\Data
    mklink /D D:\SkyDrive\Log D:\Log

    This creates a symbolic link from my data and log folders to my SkyDrive folder. Any file saved in either location will instantly appear in the other. And since my SkyDrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth of course).

    So what does this have to do with database mirroring? Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link. Although it doesn't actually use SkyDrive, it performs the same function. So in effect, the following statement:

    ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022'

    is turned into:

    mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$"

    The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers. I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality.

    To wrap this up, all you have to do is the following:

    1. Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples).
    2. On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data
    3. On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data
    4. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log).

    Voilà! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either. So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline. It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency.

    I will end this with the obvious "not supported by Microsoft" and "don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN / NAS based storage solution and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to up vote my Connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes or hours to complete when the CPU is busy.

    In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk (transaction log records, for example), the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete, get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally run on the CPU (task in running state).

    The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use their full quantum).

    In case the CPU usage is high - let's say many CPU-intensive queries are running on the instance - there is a possibility that IO operations that are completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server hundreds of milliseconds, for instance, to process the completed IO operation. For example, let's say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or a battery-backed controller with write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU-intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk, Avg. Disk sec/Write will report very low latency times.

    Such delayed IO handling also occurs for read operations, with artificially very high PAGEIOLATCH_SH wait time (with the number of PAGEIOLATCH_SH waits remaining the same). This problem will manifest more and more as customers start using SSD-based storage for SQL Server, since faster IOs drive CPU usage to its limits. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
    We have a Connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage - (with example scripts) to reproduce this behavior; please up vote the item so the issue will be addressed by the SQL Server product team soon.

    Thanks for your help and best regards,
    Ramesh Meyyappan
    Home: www.sqlworkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan
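    A hedged way to observe the split the post describes - time waiting on the IO resource versus time spent waiting for a CPU after the IO completed - is to compare signal wait against total wait in sys.dm_os_wait_stats. This query is a sketch for illustration, not from the original post:

        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms,
               wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN (N'WRITELOG', N'PAGEIOLATCH_SH')
        ORDER BY wait_time_ms DESC;

    High and growing signal_wait_time_ms on these IO wait types, alongside low disk latency in Performance Monitor, is the pattern the post attributes to CPU pressure.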

    Read the article

  • Installing SQL Server 2008

    - by Jason Ulloa
    In this post I will try to explain the steps to install SQL Server and then configure it.

    Step 1: Setup Support Rules. This will be the first installation screen we run into when trying to install SQL Server. Here we only need to click Next.

    Step 2: SQL Server Feature Selection. In my opinion this is the most important step of the SQL installation process, since it lets us select all the components the server will have afterwards. The essentials here are the database engine services and the management tools; everything else is a plus on top of the engine.

    Step 3: Instance Configuration. Nothing to worry about in this step; we just click Next.

    Step 4: Disk Space Requirements. Again, no work for us here. This is just an informational screen where SQL Server shows the current disk space and the space the installation will consume. Click Next.

    Step 5: Server Configuration. This is one of the most important steps, since here we tell SQL Server which account it will use to authenticate and bring up each of the services we selected at the start. When working locally, NT AUTHORITY\SYSTEM is generally the best option. If we select an account with insufficient permissions in this step, SQL Server will raise an error. Click Next.

    Step 6: Database Engine Configuration. In this step we focus on the Account Provisioning tab, where we specify the account the database engine will use by default. The most advisable thing is to click the Add Current User option, which adds the Windows user currently logged in. We can also select whether we want SQL authentication mode or Mixed Mode, which includes both SQL Server and Windows authentication. For our installation we will select SQL authentication mode only. Once the user is added, click Next.

    Step 7: Finish the configuration. After the previous steps, the remaining screens require nothing special: just click Next and wait for the SQL installation to finish.
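    For completeness, the same choices can be made unattended with the SQL Server 2008 setup command line. A hedged sketch: the service account, sysadmin account and sa password are placeholders, and the exact parameter set should be verified against the edition being installed:

        setup.exe /q /ACTION=Install ^
            /FEATURES=SQLENGINE,SSMS ^
            /INSTANCENAME=MSSQLSERVER ^
            /SQLSVCACCOUNT="NT AUTHORITY\SYSTEM" ^
            /SQLSYSADMINACCOUNTS="MYDOMAIN\installer" ^
            /SECURITYMODE=SQL ^
            /SAPWD="<strong password here>"

    /SECURITYMODE=SQL together with /SAPWD corresponds to enabling SQL authentication in Step 6 above.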

    Read the article

  • Tellago announces SQL Server 2008 R2 BI quick adoption programs

    - by Vishal
    During the last year, we (Tellago) have been involved in various business intelligence initiatives that leverage some emerging BI techniques such as self-service BI or complex event processing (CEP). Specifically, in the last few months, we have partnered with Microsoft to deliver a series of events across the country where we present the different technologies of the SQL Server 2008 R2 BI stack such as PowerPivot, StreamInsight, Ad-Hoc Reporting and Master Data Services. As part of those events, we try to go beyond the traditional technology presentation and provide a series of best practices and lessons we have learned on real world BI projects that leverage these technologies. Now that SQL Server 2008 R2 has been released to manufacturing, we have launched a series of quick adoption programs that are designed to help customers understand how they can embrace the newest additions to Microsoft's BI stack as part of their IT initiatives. The programs are also designed to help customers understand how the new SQL Server features interact with established technologies such as SQL Server Analysis Services or SQL Server Integration Services. We try to keep these adoption programs very practical by doing a lot of prototyping and design sessions that will give our customers a practical glimpse of the capabilities of the technologies and how they can fit in their enterprise architecture roadmap. Here is our official announcement (you can blame my business partner, BI enthusiast, and Tellago's CEO Elizabeth Redding for the marketing pitch ;)):

    Tellago Marks Microsoft's SQL Server 2008 R2 Launch With Business Intelligence Quick Adoption Program

    Microsoft launched SQL Server 2008 R2 last week, which delivers several breakthrough business intelligence (BI) capabilities that enable organizations to:

    - Efficiently process, analyze and mine data
    - Improve IT and developer efficiency
    - Enable highly scalable and well-managed Business Intelligence on a self-service basis for business users

    The release offers a new feature called PowerPivot, which enables self service BI through connecting business users directly to enterprise data sources and providing improved reporting and analytics. The release also offers Master Data Management, which helps enterprises centrally manage critical data assets company-wide and across diverse systems, enabling increased integrity of information over time. Finally, the release includes StreamInsight, which is a framework for implementing Complex Event Processing (CEP) applications on the Microsoft platform. With StreamInsight, IT organizations can implement the infrastructure to process a large volume of events near real time, execute continuous queries against event streams and enable real time business intelligence. As a thought leader in the Business Intelligence community, Tellago has recognized the occasion by launching a series of quick adoption programs to enable the adoption of this new BI technology stack in your enterprise. Our Quick Adoption programs are designed to help you:

    - Brainstorm BI solution options
    - Architect initial infrastructure components
    - Prototype key features of a solution

    As a 2-3 day program, our approach is more efficient and cost effective than a traditional Proof of Concept because it allows you to understand the new SQL Server 2008 R2 feature set while seeing directly how you can leverage it for your business intelligence needs.
If you are interested in learning more about the BI capabilities of Microsoft's Business Intelligence stack, including SQL Server 2008 R2, we can help.  As industry experts and software content advisers to Microsoft, Tellago is the place where ideas meet technology expertise.  Let us help you see for yourself the advantages that you can gain from Microsoft's  SQL Server 2008 R2. Email or call for more information - [email protected] or 847-925-2399.

    Read the article

  • SQL Injection - some sense at last!

    - by TATWORTH
    I see various articles that proclaim means to guard against SQL injection. As individual steps they are of use, but since they are often proclaimed as "the solution", they are potentially misleading. At http://www.simple-talk.com/sql/learn-sql-server/sql-injection-defense-in-depth/ there is an article entitled "SQL Injection: Defense in Depth" - this article argues what I have argued myself. Remember that however low-grade the information on your web site is, if your site is hacked, the public may perceive the hack as if your most sensitive information had been exposed.
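    The heart of any defense-in-depth list is still parameterization. A minimal T-SQL illustration (table and column names are hypothetical):

        -- Vulnerable: attacker-controlled text becomes part of the statement
        DECLARE @input nvarchar(100) = N'x''; DROP TABLE dbo.Users; --';
        EXEC (N'SELECT * FROM dbo.Users WHERE name = ''' + @input + N'''');

        -- Parameterized: the input can only ever be a value, never SQL text
        EXEC sp_executesql
             N'SELECT * FROM dbo.Users WHERE name = @name',
             N'@name nvarchar(100)',
             @name = @input;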

    Read the article

  • SQL SERVER – List All the DMV and DMF on Server

    “How many DMVs and DMFs are there in SQL Server 2008?” - I was asked this question in one of my recent SQL Server trainings. The answer is very simple:

    SELECT name, type, type_desc
    FROM sys.system_objects
    WHERE name LIKE 'dm_%'
    ORDER BY name

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, [...]

    Read the article

  • Sql Server Express Profiler

    - by csharp-source.net
    Sql Server Express Profiler is a profiler for MS SQL Server 2005 Express. It provides most of the functionality the standard Profiler does, such as choosing events to profile, setting filters, etc. SQL Server Express Edition itself doesn't provide professional tools for profiling SQL queries; this project fills that gap. It is a .NET WinForms application, and in the future an AJAX-enabled web site, which provides the functionality of Microsoft SQL Profiler.
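    Profilers that target Express typically sit on top of the server-side SQL Trace procedures, which ship with Express even though the Profiler GUI does not. A minimal hedged sketch (the trace file path is a placeholder; SQL Server appends .trc to it):

        DECLARE @TraceID int;
        EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Temp\express_trace';
        -- Capture TextData (column 1) for RPC:Completed (event 10)
        -- and SQL:BatchCompleted (event 12)
        EXEC sp_trace_setevent @TraceID, 10, 1, 1;
        EXEC sp_trace_setevent @TraceID, 12, 1, 1;
        EXEC sp_trace_setstatus @TraceID, 1; -- start the trace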

    Read the article

  • PL/SQL to delete invalid data from token Strings

    - by Jie Chen
    A previous article describes how to delete duplicated values from a token string in bulk mode. This one extends it and shows a way to delete invalid data.

    Scenario: suppose we have page_two and manufacturers tables in the database, with the following DDL:

    SQL> desc page_two;
     Name                        Null?    Type
     --------------------------- -------- ------------------------
     MULTILIST04                          VARCHAR2(765)

    SQL> desc manufacturers;
     Name                        Null?    Type
     --------------------------- -------- ------
     ID                          NOT NULL NUMBER
     NAME                                 VARCHAR

    In table page_two, column multilist04 stores a token string split with commas. Each token represents a valid ID in the manufacturers table. My expectation is to delete invalid tokens from page_two.multilist04, i.e. those that have no matching id in manufacturers.id. For example, in the value below, 33 and 22 are invalid because there is no ID equal to 33 or 22 in the manufacturers table, so I need to delete 33 and 22:

    ,6295728,33,6295729,6295730,6295731,22,

    SQL> col rowid format a20;
    SQL> col multilist04 format a50;
    SQL> select rowid, multilist04 from page_two;

    ROWID                MULTILIST04
    -------------------- --------------------------------------------------
    AAB+UrADfAAAAhUAAI   ,6295728,6295729,6295730,6295731,
    AAB+UrADfAAAAhUAAJ   ,1111,6295728,6295729,6295730,6295731,
    AAB+UrADfAAAAhUAAK   ,6295728,111,6295729,6295730,6295731,
    AAB+UrADfAAAAhUAAL   ,6295728,6295729,6295730,6295731,22,
    AAB+UrADfAAAAhUAAM   ,6295728,33,6295729,6295730,6295731,22,

    SQL> select id, encode_name from manufacturers where id in (1111,11,22,33);

    No rows selected

    Solution: as there is no built-in SPLIT function or similar in PL/SQL, I had to write it myself. I coded an intermediate Split function that extracts the token value between the current delimiter and the next one. The next program is the main entry point: it fetches each multilist04 value from page_two, processing each row with a cursor. For each multilist04 value it uses the Split function above to extract each token into the singValue variable, then checks whether it exists in manufacturers.id. If not found, it sets fixFlag to 1, marking the token for deletion.
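    The article's actual code is not reproduced in this excerpt; as a hypothetical sketch, a Split helper of this kind can be written with INSTR and SUBSTR, returning the token between the p_pos-th comma and the next one (which suits the leading/trailing commas in the stored format):

        CREATE OR REPLACE FUNCTION split_token(
          p_list VARCHAR2,
          p_pos  PLS_INTEGER
        ) RETURN VARCHAR2 IS
          v_start PLS_INTEGER;
          v_end   PLS_INTEGER;
        BEGIN
          v_start := INSTR(p_list, ',', 1, p_pos);
          v_end   := INSTR(p_list, ',', 1, p_pos + 1);
          IF v_start = 0 OR v_end = 0 THEN
            RETURN NULL; -- no such token
          END IF;
          RETURN SUBSTR(p_list, v_start + 1, v_end - v_start - 1);
        END;
        /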

    Read the article

  • Online SQL course [closed]

    - by Sualeh Fatehi
    Does anyone know of a free online SQL resource that allows you to practice SQL online without installing a database? Sort of like Codecademy? I am looking to start teaching SQL to a remote audience, and I want to be able to set up a schema and some data, and have the students run SQL against my schema and practice. I also want a way to set up some exercises for them. In short, a Codecademy-like environment for SQL.

    Read the article

  • Unable to install SQL 2008 on Windows 7

    - by Axel
    The story: trying to install SQL 2008 on Windows 7 hangs on SqlEngineDBStartconfigAction_install_configrc_Cpu32.

    What I tried:

    - Uninstall hangs on validation
    - Manual uninstall using msiinv.exe and msiexec /x works
    - Added the SQL service accounts to local admins: no help
    - Turned off UAC: no help

    Last lines in the setup log:

    2010-04-01 16:18:05 SQLEngine: : Checking Engine checkpoint 'GetSqlServerProcessHandle'
    2010-04-01 16:18:05 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' to be created
    2010-04-01 16:18:07 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' or sql process handle to be signaled
    2010-04-01 16:18:07 SQLEngine: : Checking Engine checkpoint 'WaitSqlServerStartEvents'
    2010-04-01 16:18:53 Slp: Sco: Attempting to initialize script
    2010-04-01 16:18:53 Slp: Sco: Attempting to initialize default connection string
    2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NotSpecified
    2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NamedPipes
    2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connection String: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup
    2010-04-01 16:18:53 SQLEngine: : Checking Engine checkpoint 'ServiceConfigConnect'
    2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connecting to SQL....
    2010-04-01 16:18:53 Slp: Sco: Attempting to connect script
    2010-04-01 16:18:53 Slp: Connection string: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup

    And now comes the fun part: when I open Configuration Manager I can see the service running. I enabled named pipes and TCP/IP and restarted the service. I'm able to connect to the server using an OLE DB connection, but not with the Native Client. And what I find suspicious is the following error in my app log:

    .NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\Tools\VDT\DataProjects.dll . Error code = 0x8007000b

    On MS Connect this is reported as a bug, but MS is unable to reproduce the problem, although when you search the fora I'm not the only one with this problem. So any help is appreciated.

    Read the article

  • SqlBulkCopy using SQL CE

    - by harrisonmeister
    Is it possible to use SqlBulkCopy with SQL Server Compact Edition, e.g. (*.sdf) files? I know it works with SQL Server 2000 and up, but I wanted to check CE compatibility. If it doesn't, does anyone know the fastest way of getting a CSV-type file into SQL Server CE without using DataSets (puke here)?

    Read the article

  • Using LINQ to SQL in ASP.NET MVC2 project

    - by mazhar
    Well, I am new to this ORM stuff. We have to create a large project. I read about LINQ to SQL. Will it be appropriate to use in a high-risk project? I found no problem with it personally, but the thing is that there will be no going back once started, so I need some feedback from the ORM gurus here at MSDN. Will Entity Framework be better? (I am in doubt about LINQ to SQL because I have read and heard negative feedback here and there.) I will be using MVC2 as the framework, so please give feedback about LINQ to SQL in this regard.

    Q2) Also, I am a fan of stored procedures, as they are precompiled and speed things up, and I have never worked without them. I know that LINQ to SQL supports stored procedures, but will it be feasible to give up stored procedures in exchange for the beautiful data access layer generated with little effort, given that we are also in need of rapid development?

    Q3) If some fields in the database need to change, how will the changes be accommodated in the LINQ to SQL data access layer?

    Read the article

  • SQL 2008 R2 login/network issue

    - by martinjd
    I have a new, clean install of Windows Server 2008 R2 (not a VM) that I have added to a Windows Server 2003-based domain using my account, which has domain admin rights. The domain functional level is 2003. I performed a clean install of SQL Server 2008 R2 using the same account. The installation completed without any errors. I logged into SSMS locally and attempted to add another domain account by clicking Search, then Advanced, and finding the user in the domain. When I return to the "Dialog - New" window and click OK, I receive the following error:

    Create failed for Login 'Domain\User'. (Microsoft.SqlServer.Smo)
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    Windows NT user or group 'Domain\User' not found. Check the name again. (Microsoft SQL Server, Error: 15401)

    I have verified that the firewall is off, tried adding a different domain user, tried using sa to add a user, installed the hotfix for KB 976494, and verified that the following Local Security Policy settings are disabled:

    - Domain Member: Digitally encrypt or sign secure channel
    - Domain Member: Digitally encrypt secure channel
    - Domain Member: Digitally sign secure channel

    None of which has made a difference. I can RDP to a Server 2003 server running SQL 2008 and add the same domain user without issue. Also, if I try to connect to the database server with SSMS from another system on the domain using my account, I get the following error:

    Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    and on the database server I see the following in the security event log:

    An account failed to log on.
    Subject:
        Security ID: NULL SID
        Account Name: -
        Account Domain: -
        Logon ID: 0x0
    Logon Type: 3
    Account For Which Logon Failed:
        Security ID: NULL SID
        Account Name: myUserName
        Account Domain: MYDOMAIN
    Failure Information:
        Failure Reason: An Error occured during Logon.
        Status: 0xc000018d
        Sub Status: 0x0
    Process Information:
        Caller Process ID: 0x0
        Caller Process Name: -
    Network Information:
        Workstation Name: MYWKS
        Source Network Address: -
        Source Port: -
    Detailed Authentication Information:
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Transited Services: -
        Package Name (NTLM only): -
        Key Length: 0

    I am sure that the "NULL SID" has some significant meaning, but I have no idea at this point what the issue could be.
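    For what it's worth, the SSMS dialog ultimately just issues T-SQL, so the same failure can be reproduced and probed without the UI ('MYDOMAIN\someuser' is a placeholder):

        -- What the New Login dialog runs under the hood
        CREATE LOGIN [MYDOMAIN\someuser] FROM WINDOWS;

        -- Asks SQL Server to resolve the account against the domain;
        -- if this fails too, the problem is domain name resolution/trust, not SSMS
        EXEC xp_logininfo 'MYDOMAIN\someuser';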

    Read the article

  • Database migrations for SQL Server

    - by Art
    I need a database migration framework for SQL Server, capable of managing both schema changes and data migrations. I guess I am looking for something similar to django's South framework here. Given that South is tightly coupled with django's ORM, and that there are so many ORMs for SQL Server, I guess a generic migration framework - one that enables you to write and execute SQL data/schema change scripts in a controlled, sequential manner - should be sufficient.
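    A minimal sketch of the versioned-migration pattern most such frameworks implement (table, script and column names here are illustrative, not from any particular tool):

        -- One-time bootstrap: track which change scripts have been applied
        CREATE TABLE dbo.SchemaVersion (
            Version    int           NOT NULL PRIMARY KEY,
            ScriptName nvarchar(260) NOT NULL,
            AppliedAt  datetime      NOT NULL DEFAULT GETDATE()
        );

        -- Each numbered script guards itself so it executes exactly once,
        -- and the runner applies scripts in Version order
        IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 42)
        BEGIN
            ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
            INSERT INTO dbo.SchemaVersion (Version, ScriptName)
            VALUES (42, N'0042_add_customer_email.sql');
        END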

    Read the article
