Search Results

Search found 41025 results on 1641 pages for 'in memory database'.

Page 240 of 1641

  • Project Server 2010 site creation - the database connection string is not available

    - by Brandon Montgomery
    I am trying to get Project Server 2010 up and running on a Win Server 2008 box. I've got SharePoint 2010 installed and Project Server 2010 installed. I open SharePoint Central Administration, then I go to Manage Service Applications > Project Server Service Application. It looks like there is a site under the "Sharepoint - 80" section, but the Status says "Failed - see the Application event Log". When I click on the site and select "Retry" I get the same thing. In the event viewer, I see an error with SharePoint Foundation Search as the Source. It reads: Could not create a database session. Context: Application '276504a6-93b1-4c1f-a900-fd6ed9d5c117' Details: The database connection string is not available. (0xc0041228) How can I fix this? Is there some configuration I missed?

    Read the article

  • 3 Root accounts in MySQL database

    - by hairbymaurice
    Hello, I have managed to get MySQL running under Ubuntu 8.10, and I am now diligently trying to secure the database and add passwords for the root users. My question: I have a root user under the host "kickseed" with no password set. I have no idea what kickseed is, as the database is installed under localhost; on searching around I have discovered that this is something to do with the Ubuntu OS itself. Is it safe to delete this user account from MySQL, or is it used for something by the OS? If I need to keep it, should I / can I protect it with a password? Also, I have another root account under the host IP 127.0.0.1 - again, can I delete this? My absolute preference would be to have only one account with root access, but I do not want to delete these accounts if they are necessary. Thanks for tolerating a newbie. Regards, Hairby
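
    A minimal MySQL sketch of the checks involved (the 'kickseed' host name comes from the question; the password value is a placeholder to replace):

        -- List every root account and whether it currently has a password
        SELECT User, Host, Password FROM mysql.user WHERE User = 'root';

        -- Give a password to an account you want to keep
        SET PASSWORD FOR 'root'@'localhost' = PASSWORD('choose-a-strong-password');

        -- Drop an account you are confident you do not need (back up the mysql database first)
        DROP USER 'root'@'kickseed';
        FLUSH PRIVILEGES;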

    Read the article

  • Recovering database files from a corrupted VHD

    - by Apocalypse9
    We have a SQL Server hosted on a virtual machine. Our hosting company updated/restarted the server, and for some reason the virtual machines became unbootable. We've spoken to Microsoft and used a few higher-level tools to attempt to recover the virtual machines, but were unsuccessful. In browsing the file system, the database folder doesn't even appear. I'm wondering if there are any lower-level tools that might be able to find and copy the database files. As far as I know the physical hard drive is OK, so I'm hoping there may be some way to recover the files themselves even if the rest of the virtual machine file system is a loss. Obviously we're in a bit of a bind, and any help/suggestions are very much appreciated.

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevent the disk from being mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!

    So I went and fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)): e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report https://bugzilla.gnome.org/show_bug.cgi?id=467925 ). I started this whole thing on Sunday evening (2012-11-04_2200), so about 48 hours ago; this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slow, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that it's a good idea to see if the disk is just that slow, because maybe it's damaged, and I think these outputs tell me that this is not the case here: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04

    Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached There are around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.

    And finally, my question: Will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3 terabyte disk, that would give me 134 days to go if the speed stays constant and it scans the disk like this completely and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think that the hardware is damaged, since the disk is only a few months old and since I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06_2300); now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseeks before the reads, like 1419860619264, are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be linear progress on a big scale; maybe there are only some areas that need work, with big gaps in between them. (Times are in CET.)

    Read the article

  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about disk cache, and no, it is not my case :) Sorry for the preamble :) I'm using CentOS 5. Every application in the system is swapping heavily, and the system is very slow. When I do free -m, here is what I get: total used free shared buffers cached Mem: 3952 3929 22 0 1 18 -/+ buffers/cache: 3909 42 Swap: 16383 46 16337 So, I actually have only 42 Mb to use! As far as I understand, -/+ buffers/cache actually doesn't count the disk cache, so I indeed only have 42 Mb, right? I thought I might be wrong, so I tried to switch off the disk caching and it had no effect - the picture remained the same.

    So, I decided to find out who is using all my RAM, and I used top for that. But, apparently, it reports that no process is using my RAM. The only process in my top is MySQL, but it is using 0.1% of RAM and 400Mb of swap. Same picture when I try to run other services or applications - all go into swap, and top shows that MEM is not used (0.1% maximum for any process). top - 15:09:00 up 2:09, 2 users, load average: 0.02, 0.16, 0.11 Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4046868k total, 4001368k used, 45500k free, 748k buffers Swap: 16777208k total, 68840k used, 16708368k free, 16632k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ SWAP COMMAND 3214 ntp 15 0 23412 5044 3916 S 0.0 0.1 0:00.00 17m ntpd 2319 root 5 -10 12648 4460 3184 S 0.0 0.1 0:00.00 8188 iscsid 2168 root RT 0 22120 3692 2848 S 0.0 0.1 0:00.00 17m multipathd 5113 mysql 18 0 474m 2356 856 S 0.0 0.1 0:00.11 472m mysqld 4106 root 34 19 251m 1944 1360 S 0.0 0.0 0:00.11 249m yum-updatesd 4109 root 15 0 90152 1904 1772 S 0.0 0.0 0:00.18 86m sshd 5175 root 15 0 90156 1896 1772 S 0.0 0.0 0:00.02 86m sshd

    A restart doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4Gb RAM, RAID1). So, with that, I'm pretty sure it is not the disk cache that is using the RAM, because normally it would have been reduced to let other processes use RAM rather than go to swap. So, finally, the question is: does anyone have any ideas how to find out what process is actually using the memory so heavily?

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day. The overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end. They're experiencing timeouts once SQLS RAM usage is high. The server is currently running x64 sqls2008 on a VM with nearly 9 GB of RAM. SQL Server's 'max server memory option' is currently set to 6GB. I've put the results of the trace in a table and queried them with this query: SELECT TextData, ApplicationName, Reads FROM [TraceWednesday] WHERE textdata is not null and EventClass = 12 GROUP BY TextData, ApplicationName, Reads ORDER BY Reads DESC As I expected, some values are very high. Top Reads, in pages: 2504188 1965910 1445636 1252433 1239108 1210153 1088580 1072725 Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which then is roughly ~20'000 MB, 20GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM. OR, maybe it might just slow down the process of chewing up the whole RAM. If a lot fewer pages are read, maybe it'll all run much better even when usage is high? (Less time swapping, etc.) Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear. SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I already know (not much yet) is much appreciated. Leo.
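
    A quick T-SQL sketch of the arithmetic, plus an alternative way to find the heavy readers from the plan cache rather than a trace (the SUBSTRING handling is simplified; the DMVs shown are standard SQL Server 2008 objects):

        -- SQL Server pages are 8 KB, so Reads (in pages) converts to MB/GB like this:
        SELECT 2504188                     AS pages_read,
               2504188 * 8.0 / 1024        AS approx_MB,   -- about 19,564 MB
               2504188 * 8.0 / 1024 / 1024 AS approx_GB;   -- about 19 GB, so "roughly 20 GB" is right

        -- Top statements by logical reads straight from the plan cache (no trace needed)
        SELECT TOP (10)
               qs.total_logical_reads,
               qs.execution_count,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 200) AS statement_start
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;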

    Read the article

  • WMI Notification and database mirroring

    - by user22215
    Hi all, I'm having a problem configuring a WMI alert that I would like to use with database mirroring. I'm running on Windows 2008 Enterprise x64 with SQL Server 2008 Enterprise x64; SQL Server also has SP1 installed. Basically I click on the alert, select WMI, and after that I typed the WQL statement below: SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE DatabaseName = 'testmove' AND State = 8 I have also made sure Service Broker is enabled for msdb and all mirrored databases; however, I still can't get this to work - basically the alert never fires. I'm testing with just the alert functionality; I have not even added in the agent job yet. I tested this by right-clicking on my mirrored database and forcing it to fail over. Any help with this problem would be much appreciated.
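
    A hedged set of T-SQL checks for the usual prerequisites (the database and state names come from the question):

        -- WMI alert delivery relies on Service Broker in msdb
        SELECT name, is_broker_enabled
        FROM sys.databases
        WHERE name IN ('msdb', 'testmove');

        -- If msdb shows 0, enable it (needs exclusive access to msdb):
        -- ALTER DATABASE msdb SET ENABLE_BROKER;

        -- Confirm the mirroring session really changes state when you force a failover
        SELECT DB_NAME(database_id) AS database_name,
               mirroring_state_desc,
               mirroring_role_desc
        FROM sys.database_mirroring
        WHERE mirroring_guid IS NOT NULL;

    One more thing worth double-checking against the documentation: if memory serves, a failover forced from Management Studio raises the Manual Failover state (7) rather than Automatic Failover (8), in which case the WQL filter State = 8 would never match during this particular test.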

    Read the article

  • Some strange things in the db table of the mysql database

    - by 0al0
    I noticed some weird things in the db table of the mysql database on a client's server, after the MySQL service stopped for no reason. What are the test and test_% entries? Why are there two entries for the database AQUA? Why is there an entry with a blank name? Should I worry about any of those? What should I do for each specific case? Is it safe to just delete the ones that should not be there, after backing up?
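
    For context, each row in mysql.db is a per-user/per-host privilege grant, so one database name can legitimately appear more than once (for example, two different users granted rights on AQUA), a blank User field usually denotes the anonymous user, and the test / test_% rows are the default grants for the test database. A hedged cleanup sketch, after backing up the mysql database:

        -- See exactly which grants the rows represent
        SELECT Host, Db, User FROM mysql.db ORDER BY Db, User, Host;

        -- The standard mysql_secure_installation cleanup: drop the test database
        -- and remove the default grants that point at it
        DROP DATABASE IF EXISTS test;
        DELETE FROM mysql.db WHERE Db = 'test' OR Db = 'test\_%';
        FLUSH PRIVILEGES;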

    Read the article

  • SQL Server database constantly restarting

    - by Michael Itzoe
    We have SQL Server 2008 Express installed on a Windows 2003 server. Looking at the event log, one of the databases appears to be restarting anywhere from every couple of seconds to every 15 to 30 minutes. This server hosts about half a dozen databases; the problem is with only one. This database is also the only one composed of multiple schemas (not just dbo). There are thousands of events going back several months. There doesn't seem to be any effect on the website using the database, nor does any data appear to be corrupted or compromised. I'm not a DBA, so I don't even know where to look for causes of this. Any suggestions?
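
    A common cause of repeated "Starting up database ..." events on Express editions is the AUTO_CLOSE option, which Express tends to enable by default; a quick hedged check (the database name below is a placeholder):

        -- Which databases shut down whenever the last connection closes?
        SELECT name, is_auto_close_on FROM sys.databases;

        -- If the noisy database has it ON and you do not need that behaviour, turn it off:
        -- ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;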

    Read the article

  • How to connect to a MySQL server over the LAN

    - by waelbk
    OK, here is the technical description. My laptop's config: IP address 192.168.2.5, MySQL Server 5.0 on port 3306, operating system Ubuntu; the database is on this machine. My friend's laptop config: IP address 192.168.2.4, MySQL Server 5.0 on port 3306, operating system Windows XP. Both are on a wireless LAN connected through a Belkin router (192.168.2.1). I put this, but it's not working: url = "jdbc:mysql://192.168.2.5:3306/Database" So how do I configure things to connect to this database?
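
    Two things usually have to be true for this to work: the MySQL server on the Ubuntu machine must be listening on the LAN interface (bind-address in /etc/mysql/my.cnf must not be 127.0.0.1 and skip-networking must be off), and an account must be granted access from the Windows machine's address. A minimal sketch ('appuser' and the password are placeholders; 'Database' is the schema name from the JDBC URL):

        -- Run on the Ubuntu laptop (192.168.2.5); allows connections from 192.168.2.4 only
        GRANT ALL PRIVILEGES ON Database.* TO 'appuser'@'192.168.2.4' IDENTIFIED BY 'choose-a-password';
        FLUSH PRIVILEGES;

        -- Or allow the whole 192.168.2.x subnet
        -- GRANT ALL PRIVILEGES ON Database.* TO 'appuser'@'192.168.2.%' IDENTIFIED BY 'choose-a-password';

    With that in place, the JDBC URL from the question should work from the XP machine using the new account.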

    Read the article

  • unlock database files when SQL server is idle

    - by Andy
    In my development/test environment on my laptop, I can't back up the SQL Server database files because the file handles are kept permanently open by SQL Server (VSS doesn't work because the drive is TrueCrypt-encrypted). I was hoping there might be some setting in SQL Server that can make it unlock the data files after a certain period of inactivity and automatically open them again on demand, but I can't find anything. I don't really want to be dumping the database out every night because it's only a development environment. Apart from stopping SQL Server before I do the backup, is there any other solution?
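
    The AUTO_CLOSE database option behaves roughly as described: the database is shut down and its file handles released once the last connection closes, and it is reopened on demand. A minimal sketch (the database name is a placeholder; reasonable for a single-developer instance, generally discouraged on busy servers):

        ALTER DATABASE [YourDevDatabase] SET AUTO_CLOSE ON;

        -- Verify the setting took effect
        SELECT name, is_auto_close_on FROM sys.databases WHERE name = 'YourDevDatabase';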

    Read the article

  • Code-First Database Creation During TFS 2010 CI Build

    - by jedimindtrickster
    I would like to automate code-first database generation during the automated CI build of a web project in Team Foundation Server 2010. When run locally, the tests create a code-first database specified by the connection string in the app.config of the tests project. How do I configure the TFS Build Configuration to mimic this behaviour on the TFS build server? Edit: The problem, it turns out, was that the TFS build server was successfully running the test, which was using the default connection string in the app.config, which pointed to the local SQL Server rather than where I expected. The solution was to use SlowCheetah on the TFS server to transform the App.config file using the QA transform, as per this blog article.

    Read the article

  • Cleaning cruft from the stored configs database

    - by Zoredache
    I have set up stored configuration primarily as a method to manage my ssh known_hosts. Unfortunately, as I retire hosts the old configs still exist in my database. The answer seems to be to run the command puppet node clean <hostname>. The problem is that while this command does run, and does clean up some data, it doesn't seem to clean up everything. For example, I can still find values in the puppet_tags table that only applied to hosts that no longer exist. What should I be doing to keep my stored configuration database clean of all the extra junk that seems to be building up? P.S. Can anyone point me to any documentation for the stored configuration schema? If I could find good documentation, or at least an entity-relationship diagram, I would be tempted to just do some manual clean-up.
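
    A hedged starting point for manual inspection, assuming the usual Rails-style stored configs schema (the hosts, resource_tags and puppet_tags table and column names below are assumptions based on that schema, so verify them against your own database first):

        -- Which nodes does the stored configs database still know about, and how stale are they?
        SELECT id, name, last_compile FROM hosts ORDER BY last_compile;

        -- Tags that are no longer referenced by any resource
        SELECT pt.id, pt.name
        FROM puppet_tags pt
        LEFT JOIN resource_tags rt ON rt.puppet_tag_id = pt.id
        WHERE rt.id IS NULL;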

    Read the article

  • I cannot connect to remote MySQL database using SSH tunnel

    - by Scott
    Brand new server, brand new MySQL 5.5 install on Ubuntu 12.04. I can log in to the database as root from the command line. I can log on via Navicat MySQL or Sequel Pro as root on port 3306 from my Mac. I cannot log in using an SSH tunnel to the server and then to the database as root. I have tried both localhost and 127.0.0.1 as server for the local connection part. My password is fine. root is currently defined at %, 127.0.0.1, and localhost. I have set up this same type of connection at least 30 times before and never had a problem. The SSH connection gets made with no problem, and then it just hangs trying to connect to the DB and finally times out. The only thing I changed in my.cnf was to comment out the bind-address = 127.0.0.1 line. Any help? Any Ideas?

    Read the article

  • how to change database timezone on vps

    - by michael
    I am running my domain on a VPS and I have Virtualmin and Webmin access. In my PHP files, I need to record the current time using MySQL's NOW() when a row is inserted. I changed the timezone in the PHP configuration in Webmin, but the database function NOW() is still using the default timezone. How can I change the database timezone? PS: I ran a MySQL command to change the timezone in Webmin, but it gave me the error: Failed to execute SQL : SQL SET time_zone = 'America/New_York'; failed : Unknown or incorrect time zone: 'America/New_York'
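
    The "Unknown or incorrect time zone" error usually means the named time zone tables in the mysql schema have not been loaded (they are populated from the shell with the mysql_tzinfo_to_sql utility). A numeric UTC offset works without them, at the cost of not tracking daylight saving; a minimal sketch (the offset shown matches US Eastern Standard Time and is only an example):

        SET GLOBAL time_zone = '-05:00';   -- default for new connections (SUPER privilege required)
        SET time_zone = '-05:00';          -- current session only

        -- Confirm what NOW() will use
        SELECT @@global.time_zone, @@session.time_zone, NOW();

    To make the change survive a restart, the equivalent setting is default-time-zone in my.cnf.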

    Read the article

  • Migrating Split Access Database from one domain to another (not working, details in Q)

    - by Expo_Rob
    Some background: I'm a programmer, not a network administrator, who has been asked to migrate some accounting software (Integrated Office Accounting version 3.2) from an existing domain (OLD_NETWORK) to a new domain (NEW_NETWORK). Nobody at the office knows how it works under the hood. It is a split Access 2000 database with the back-end shared on a file server (which is also the DC) using mapped drives. The DC is NT Server 4 SP 6. The new server is Server 2003. The two networks are running independently (ie: two computers on each desk). I have been able to get new computers set up on NEW_NETWORK and working with the IOA software just perfectly but for one problem: the company here uses other entirely separate databases which access the tables IOA maintains (specifically the 'customers' table) via links. To switch between these systems, you press F11, then File-Open the appropriate database and away you go (this is necessary to maintain the permissions that the IOA system uses to protect the customers table). The entire database is Access 2000, the links go to other Access databases, and SQL Server is not involved in any way, nor is a migration to SQL Server likely. If I can't migrate anything over, everything will stay as it is, and the NEW_NETWORK computers will not be used.
    The problem: when I try to update these separate databases (I shall call one "BANK_ACCOUNT", but the name does not matter), it says "this recordset cannot be updated". It also will sometimes not pull information out of the 'customers' table (ie: date_entered) when looking at a report of everyone who opened a bank account on a certain day (ie: today). I have tried:
    - Giving 'everyone' full control via shared directory permissions
    - Giving 'everyone' full control on a file system level
    - Checking the permissions within Access (everyone has full read/write on all tables)
    - Copying the entire server contents from one file server to another (ie: xcopy everything)
    - Copying the entire local client files from one computer to another, putting them in the exact same position in the file system, with the same permissions (or full control to 'everyone')
    - Running as an Administrator
    - Taking one of the NEW_NETWORK computers, having it join OLD_NETWORK and run the software (direct copy from a working system with identical drive mappings); this did not work
    - Weeping openly
    My question: Is there anything else I can try? (Sorry for this being so long.)

    Read the article

  • In Ruby, how do I read memory values from an external process?

    - by grg-n-sox
    So all I simply want to do is make a Ruby program that reads some values from a known memory address in another process's virtual memory. Through my research and basic knowledge of hex editing a running process's x86 assembly in memory, I have found the base address and offsets for the values in memory I want. I do not want to change them; I just want to read them. I asked a developer of a memory editor how to approach this, abstract of language and assuming a Windows platform. He told me the Win32 API calls OpenProcess, CreateProcess, ReadProcessMemory, and WriteProcessMemory were the way to go using either C or C++. I think that the way to go would be just using the Win32API class and mapping two instances of it: one for either OpenProcess or CreateProcess, depending on whether the user already has the process running or not, and another instance mapped to ReadProcessMemory. I probably still need to find the function for getting the list of running processes so I know which running process is the one I want if it is already running. This would take some work to put together, but I figure it wouldn't be too bad to code up. It is just a new area of programming for me, since I have never worked this low level from a high-level language (well, higher level than C anyway). I am just wondering about ways to approach this. I could just use a bunch of Win32API calls, but that means having to deal with a lot of string and array packing and unpacking that is system dependent. I want to eventually make this work cross-platform, since the process I am reading from is produced from an executable that has multiple platform builds (I know the memory addresses change from system to system; the idea is to have a flat file that contains all memory mappings so the Ruby program can just match the current platform environment to the matching memory mapping), but from the looks of things I'll just have to make a class that wraps whatever the current platform's memory-related shared library calls are. For all I know, there could already exist a Ruby gem that takes care of all of this for me that I am just not finding. I could also possibly try editing the executables for each build so that whenever the memory values I want to read are written to by the process, it also writes a copy of the new value to a space in shared memory, have Ruby make an instance of a class that is, under the hood, a pointer to that shared memory address, and somehow signal to the Ruby program that the value was updated and should be reloaded. Basically an interrupt-based system would be nice, but since the purpose of reading these values is just to send them to a scoreboard broadcast from a central server, I could just stick to a polling-based system that sends updates at fixed time intervals. I could also just abandon Ruby altogether and go for C or C++, but I do not know those nearly as well. I actually know more x86 than C++, and I only know C as far as system-independent ANSI C, and I have never dealt with shared system libraries before. So is there a gem or lesser-known module available that has already done this? If not, then any additional information as to how to accomplish this would be nice. I guess, long story short: how do I do all this? Thanks in advance, Grg. PS: Also, a confirmation that those Win32API calls should be aimed at the kernel32.dll library would be nice.

    Read the article

  • .NET 3.5SP1 64-bit memory model vs. 32-bit memory model

    - by James Dunne
    As I understand it, the .NET memory model on a 32-bit machine guarantees 32-bit word writes and reads to be atomic operations, but does not provide this guarantee for 64-bit words. I have written a quick tool to demonstrate this effect on a Windows XP 32-bit OS and am getting results consistent with that memory model description. However, I have taken this same tool's executable and run it on a Windows 7 Enterprise 64-bit OS and am getting wildly different results. Both machines have identical specs, just with different OSes installed. I would have expected that the .NET memory model would guarantee writes and reads of BOTH 32-bit and 64-bit words to be atomic on a 64-bit OS. I find results completely contrary to BOTH assumptions. 32-bit reads and writes are not demonstrated to be atomic on this OS. Can someone explain to me why this fails on a 64-bit OS?
    Tool code:
        using System;
        using System.Threading;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var th = new Thread(new ThreadStart(RunThread));
                    var th2 = new Thread(new ThreadStart(RunThread));
                    int lastRecordedInt = 0;
                    long lastRecordedLong = 0L;
                    th.Start();
                    th2.Start();
                    while (!done)
                    {
                        int newIntValue = intValue;
                        long newLongValue = longValue;
                        if (lastRecordedInt > newIntValue)
                            Console.WriteLine("BING(int)! {0} > {1}, {2}", lastRecordedInt, newIntValue, (lastRecordedInt - newIntValue));
                        if (lastRecordedLong > newLongValue)
                            Console.WriteLine("BING(long)! {0} > {1}, {2}", lastRecordedLong, newLongValue, (lastRecordedLong - newLongValue));
                        lastRecordedInt = newIntValue;
                        lastRecordedLong = newLongValue;
                    }
                    th.Join();
                    th2.Join();
                    Console.WriteLine("{0} =? {2}, {1} =? {3}", intValue, longValue, Int32.MaxValue / 2, (long)Int32.MaxValue + (Int32.MaxValue / 2));
                }

                private static long longValue = Int32.MaxValue;
                private static int intValue;
                private static bool done = false;

                static void RunThread()
                {
                    for (int i = 0; i < Int32.MaxValue / 4; ++i)
                    {
                        ++longValue;
                        ++intValue;
                    }
                    done = true;
                }
            }
        }
    Results on Windows XP 32-bit:
        Windows XP 32-bit Intel Core2 Duo P8700 @ 2.53GHz
        BING(long)! 2161093208 > 2161092246, 962
        BING(long)! 2162448397 > 2161273312, 1175085
        BING(long)! 2270110050 > 2270109040, 1010
        BING(long)! 2270115061 > 2270110059, 5002
        BING(long)! 2558052223 > 2557528157, 524066
        BING(long)! 2571660540 > 2571659563, 977
        BING(long)! 2646433569 > 2646432557, 1012
        BING(long)! 2660841714 > 2660840732, 982
        BING(long)! 2661795522 > 2660841715, 953807
        BING(long)! 2712855281 > 2712854239, 1042
        BING(long)! 2737627472 > 2735210929, 2416543
        1025780885 =? 1073741823, 3168207035 =? 3221225470
    Notice how BING(int) is never written, which demonstrates that 32-bit reads/writes are atomic on this 32-bit OS.
    Results on Windows 7 Enterprise 64-bit:
        Windows 7 Enterprise 64-bit Intel Core2 Duo P8700 @ 2.53GHz
        BING(long)! 2208482159 > 2208121217, 360942
        BING(int)! 280292777 > 279704627, 588150
        BING(int)! 308158865 > 308131694, 27171
        BING(long)! 2549116628 > 2548884894, 231734
        BING(int)! 534815527 > 534708027, 107500
        BING(int)! 545113548 > 544270063, 843485
        BING(long)! 2710030799 > 2709941968, 88831
        BING(int)! 668662394 > 667539649, 1122745
        1006355562 =? 1073741823, 3154727581 =? 3221225470
    Notice that BING(long) AND BING(int) are both displayed! Why are the 32-bit operations failing, let alone the 64-bit ones?

    Read the article

  • SQL SERVER – Concat Strings in SQL Server using T-SQL – SQL in Sixty Seconds #035 – Video

    - by pinaldave
    Concatenating strings is one of the most common tasks in SQL Server, and every developer comes across it. We have to concatenate strings when, for example, we want to display a person's full name built from the first name and last name. In this video we will see various methods to concatenate strings. SQL Server 2012 has introduced the new function CONCAT, which concatenates strings much more efficiently. When we concatenate values with '+' in SQL Server, we have to make sure that the values are in string format; when we attempt to concatenate an integer, we have to convert it to a string or else it will throw an error. However, with the newly introduced CONCAT function in SQL Server 2012 we do not have to worry about this kind of issue: it concatenates strings and integers without casting or converting them. You can pass various values as parameters to CONCAT and it concatenates them together. Let us see how to concatenate the values in Sixty Seconds. Here is the script used in the video.
        -- Method 1: Concatenating two strings
        SELECT 'FirstName' + ' ' + 'LastName' AS FullName
        -- Method 2: Concatenating two Numbers
        SELECT CAST(1 AS VARCHAR(10)) + ' ' + CAST(2 AS VARCHAR(10))
        -- Method 3: Concatenating values of table columns
        SELECT FirstName + ' ' + LastName AS FullName FROM AdventureWorks2012.Person.Person
        -- Method 4: SQL Server 2012 CONCAT function
        SELECT CONCAT('FirstName' , ' ' , 'LastName') AS FullName
        -- Method 5: SQL Server 2012 CONCAT function
        SELECT CONCAT('FirstName' , ' ' , 1) AS FullName
    Related Tips in SQL in Sixty Seconds: SQL SERVER – Concat Function in SQL Server – SQL Concatenation String Function – CONCAT() – A Quick Introduction 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage A Quick Trick about SQL Server 2012 CONCAT Function – PRINT A Quick Trick about SQL Server 2012 CONCAT function
    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • SQL SERVER – Standard Reports from SQL Server Management Studio – SQL in Sixty Seconds #016 – Video

    - by pinaldave
    SQL Server Management Studio 2012 is a wonderful tool and has many different features. Many times, an average user does not use them because they are not aware of these features. Today, we will learn one such feature. SSMS comes with many built-in performance and activity reports, but we do not use them to their full potential. Connect to the SQL Server node >> right-click on it >> go to Reports >> click on Standard Reports >> pick any report. Please note that some of the reports can be IO intensive and are not suggested to be run during business hours!
    More on Standard Reports:
    SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS
    SQL SERVER – Generate Report for Index Physical Statistics – SSMS
    SQL SERVER – Configure Management Data Collection in Quick Steps
    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • SQL SERVER – Color Coding SQL Server Management Studio Status Bar – SQL in Sixty Seconds #023 – Video

    - by pinaldave
    I often see developers executing unplanned code on the production server when they actually meant to run it on the development server. Developers and DBAs get confused because, when they use SQL Server Management Studio (SSMS), they forget to pay attention to the server they are connecting to. It is very easy to fix this problem: you can select a different color for each server. Once you have a different color for each server in the status bar, it is easier for developers to notice which server they are about to execute the script against. Personally, when I work on SQL Server development, here is the color code I follow: I keep green for my development server, blue for my staging server, and red for my production server. Honestly, the specific colors do not signify much; a different color for each server is the key here.
    More Tips on SSMS in SQL in Sixty Seconds:
    Generate Script for Schema and Data in SQL Server – SQL in Sixty Seconds #021
    Remove Debug Button in SQL Server Management Studio – SQL in Sixty Seconds #020
    Three Tricks to Comment T-SQL in SQL Server Management Studio – SQL in Sixty Seconds #019
    Importing CSV into SQL Server – SQL in Sixty Seconds #018
    Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017
    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • SQL SERVER – Tricks to Comment T-SQL in SSMS – SQL in Sixty Seconds #019 – Video

    - by pinaldave
    Code commenting is one of the most common tasks developers perform. There are two major reasons why developers comment code: 1) during debugging and 2) documenting the code. While debugging T-SQL code I have often seen developers struggling to comment code. They spend (or waste) more time commenting and uncommenting than doing the actual debugging of the procedure. When I see a developer struggling to comment code I feel a little uncomfortable, as commenting should be a very easy task. Today we will see three quick methods to comment T-SQL code in the Query Editor. There are three different methods to comment and uncomment statements in SQL Server Management Studio: using keyboard shortcuts, using the tool bar, and using the menu bar.
    Method 1: Using Keyboard Shortcuts. Commenting the statement – CTRL+K, CTRL+C. Uncommenting the statement – CTRL+K, CTRL+U.
    Method 2: Using the Tool Bar. Use the tool bar buttons (see video).
    Method 3: Using the Menu Bar. Commenting the statement – Menu Bar >> Edit >> Advanced >> Click on Comment Selection. Uncommenting the statement – Menu Bar >> Edit >> Advanced >> Click on Uncomment Selection.
    More on Commenting Code: Two Different Ways to Comment Code – Explanation and Example
    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
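
    For reference, the two T-SQL comment styles involved (the CTRL+K, CTRL+C shortcut inserts the double-hyphen form), shown in a small standalone snippet:

        -- Single-line comment: everything after the two hyphens on this line is ignored
        SELECT GETDATE() AS CurrentTime;  -- a trailing comment on a statement

        /* Block comment: handy while debugging,
           because it can span several statements at once.
        SELECT COUNT(*) FROM sys.objects;
        */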

    Read the article
