Search Results

Search found 22982 results on 920 pages for 'users'.

Page 215/920 | < Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >

  • Single MSI File - Many RemoteApps

    - by Mikeon
    How can I create a single .MSI installer file for many RemoteApps in Remote Desktop Services? Suppose I have 10 applications exposed via RDS. To make life easier I created MSI installer packages so users can "install" those apps. Currently I have 10 different .msi files, which forces users to run 10 separate installs. Is it possible to combine all/some apps into a single .msi file? (I don't control user machines, so installing via GPO or other magic is out of the question).

    Read the article

  • Puppet - how can I copy a file to several user folders?

    - by Eliot Rocha
    Well, I was using the info on this: Puppet - Any way to copy predefined custom configuration files for software on clients from the puppet master (host)? But I need something more elaborate, because I have several desktops, each in use by 2 or 3 users, so I want to make a class that copies a shortcut onto their desktops. The computers are joined to a domain, so any user can log in on any desktop, and their profile is created on every desktop. I've tried this: class applink { file { "/home/installer/Escritorio/Workdesktop.desktop": owner => installer, group => root, mode => 770, source => "puppet://$server/files/Workdesktop.desktop" } } This is only for one user called "installer"; how can I do this for several users? Can I use $USER to do this? Any thoughts? Thank you!

    Read the article

  • Windows Installer using USB drive for temp purposes

    - by Douglas Anderson
    When installing apps that are built around Windows Installer, it would appear that it often uses my external USB hard disk (when it's connected) as the temp location while it expands and installs the application (it creates a folder off the root with a GUID name). Is there any way to change this so it always defaults to a specific drive? This appears to be the case on Windows Vista and 7, not sure about previous releases. EDIT: Current environment variables look like this: TEMP=C:\Users\<me>\AppData\Local\Temp TMP=C:\Users\<me>\AppData\Local\Temp EDIT: I have a funny suspicion that it's using the drive with the largest available free space.

    Read the article

  • Exchange Transport Service Started but not working

    - by Philippe
    Good day, Here is the problem: We are hosting a Microsoft Exchange Server. Everything was working fine until recently, when mail transport started to go wrong. We almost have to restart the service every morning. The thing is that the transport service is started, but mail is not delivered to the users, and senders to our server get a delayed delivery notification. When we restart the service, all the mail is then delivered to the users and we're good to go for a day or two. Things I've noticed: the store service grows to around 6 GB of used RAM, and the w3wp.exe process hangs around 700 MB of RAM. Is there a way to schedule a restart of the transport role every 4 hours or something while I'm solving the issue, so I don't have to worry when I leave for the weekend? And most of all... does anyone have any idea on how to solve this issue? Thanks, Philippe
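
    As a stopgap while the root cause is investigated, the restart can be scripted and handed to Task Scheduler. Below is a minimal sketch, assuming the transport role runs under the standard MSExchangeTransport service name (check services.msc) and that Python is available on the server; any language that can call net stop/net start works the same way.

        # restart_transport.py - stopgap restart of the Exchange transport service.
        # Assumes the service name is MSExchangeTransport; verify it in services.msc.
        import subprocess
        import sys

        SERVICE = "MSExchangeTransport"

        def restart(service):
            # "net stop" / "net start" block until the service has changed state.
            # A failed stop (e.g. the service was already stopped) is not fatal;
            # a failed start is, so its exit code is propagated to Task Scheduler.
            subprocess.run(["net", "stop", service], capture_output=True, text=True)
            result = subprocess.run(["net", "start", service],
                                    capture_output=True, text=True)
            print(result.stdout or result.stderr)
            return result.returncode

        if __name__ == "__main__":
            sys.exit(restart(SERVICE))

    Registering it every 4 hours is one command, along the lines of schtasks /Create /SC HOURLY /MO 4 /TN RestartTransport /TR "python C:\scripts\restart_transport.py" /RU SYSTEM. Treat it purely as a band-aid: the steadily growing store and w3wp.exe memory is the thing to chase, and back pressure from low resources is a common reason a transport service stops accepting mail while still showing as started.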

    Read the article

  • Oracle VM Templates Available for E-Business Suite 12.1.3

    - by Steven Chan (Oracle Development)
    Oracle VM has matured into a formidable virtualization product over the years. Oracle E-Business Suite is certified to run production instances on both Oracle VM 2 and 3. This applies to EBS Releases 11i and 12.  It also applies to future Oracle VM 3 updates, including subsequent Oracle VM 3.x releases. E-Business Suite 12.1.3 Oracle VM templates available now The latest EBS 12.1.3 templates for Oracle VM can be downloaded here: Oracle VM Templates: E-Business Suite Templates are available for: E-Business Suite 12.1.3 Vision (64-bit) E-Business Suite 12.1.3 Production (32-bit) E-Business Suite 12.x Sparse Middle Tiers (32-bit and 64-bit) Should EBS 11i users care? Yes.  You can use these templates to get an EBS 12 testbed environment running in minutes.  This is a great way of giving your end-users a chance to work with EBS 12 without the overhead of building an environment from scratch. References Oracle VM 3 supports a number of guest operating systems including various flavors and versions of Linux, Solaris and Windows. For information regarding certified platforms, installation and upgrade guidance and prerequisite requirements please refer to the Certifications tab on My Oracle Support as well as the following documentation: Oracle VM Installation and Upgrade Guide  Introduction to Oracle VM, Oracle VM Manager and EBS template deployment (Note 1355641.1) Related Articles Oracle VM 3 Certified with Oracle E-Business Suite Support Policies for Virtualization Technologies and Oracle E-Business Suite The Scoop: Oracle E-Business Suite Support on 64-bit Linux

    Read the article

  • cPAddons version conflict with WordPress

    - by Joel Alejandro
    I have multiple users on my CentOS 5.7 server with WHM/cPanel who installed WordPress 3.2.1 and eventually did a manual update to 3.3.1 from WordPress itself. Now the version of WP for those users doesn't match the one detected by cPanel, and of course "Upgrade" doesn't work because the directory can't be cleaned (WP is already working there). I've looked in the .cpaddons folder of each user account, and there's a YAML file there, but I'm not sure how I should edit that file to solve this issue. Is there any way to tell cPanel that those WP installations are, in fact, 3.3.1?

    Read the article

  • User based BitLocker Drive Encryption

    - by Starx
    While unlocking an encrypted drive, is it possible for that drive to be unlocked only for the particular user, and not for all the other users of the system? For example, there are two users, User1 and User2. User1 unlocks an encrypted drive, then locks the desktop, and User2 logs in to the system from his own account. Now User2 can also access the drive that User1 unlocked. User2 must not be able to open the drive; if he has the password for the drive then he may have access, but not before that.

    Read the article

  • ClickOnce applications being allowed through the firewall

    - by DanielF
    The company I work for uses an application that is deployed through ClickOnce. This means that every time they update said program, it regenerates in a new location, i.e.: C:\Users\%username%\AppData\Local\Apps\2.0\89XCT43D.10T\WBZ8WE7L.927\trm8..tion_1ad39bb503bcb5df_0008.0005_14829906c9aa8611\%appname%.exe Recently the computers were updated to Windows 7 machines, and now every time an update is pushed out, Windows Firewall sees this as a new application and prompts the users to allow the program through the firewall. This means I have to walk around to every computer and type in admin credentials. Does anyone know of a way to allow the application regardless of the path it is in? Google has not been very helpful so far.
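
    Since Windows Firewall program rules are tied to an exact path, one workaround is a small script, run with admin rights after each deployment (or at logon), that finds the current ClickOnce executable and re-registers a rule for it. A minimal sketch; the executable and rule names are placeholders, not the real application:

        # add_clickonce_rule.py - re-point a firewall rule at the newest ClickOnce exe.
        # APP_EXE and RULE_NAME are hypothetical placeholders; run elevated.
        import os
        import subprocess
        from pathlib import Path

        APP_EXE = "MyCompanyApp.exe"
        RULE_NAME = "MyCompanyApp (ClickOnce)"

        def newest_copy(root, exe_name):
            # ClickOnce unpacks every update into a new random folder; pick the newest.
            candidates = list(root.rglob(exe_name))
            return max(candidates, key=lambda p: p.stat().st_mtime) if candidates else None

        def main():
            root = Path(os.environ["LOCALAPPDATA"]) / "Apps" / "2.0"
            exe = newest_copy(root, APP_EXE)
            if exe is None:
                raise SystemExit(f"{APP_EXE} not found under {root}")
            # Drop any stale rule, then allow the current path through the firewall.
            subprocess.run(["netsh", "advfirewall", "firewall", "delete", "rule",
                            f"name={RULE_NAME}"], capture_output=True)
            subprocess.run(["netsh", "advfirewall", "firewall", "add", "rule",
                            f"name={RULE_NAME}", "dir=in", "action=allow",
                            f"program={exe}", "enable=yes"], check=True)

        if __name__ == "__main__":
            main()

    Pushing something like this out as a startup script via Group Policy, or switching the rule to allow the port the application listens on rather than the program path, avoids visiting each machine by hand.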

    Read the article

  • After restoring a SQL Server database from another server - get login fails

    - by Renso
    Issue: After you have restored a SQL Server database from another server, let's say from production to a Q/A environment, you get the "Login Fails" message for your service account. Reason: User logon information is stored in the syslogins table in the master database. By changing servers, or by altering this information by rebuilding or restoring an old version of the master database, the information may be different from when the user database dump was created. If logons do not exist for the users, they will receive an error indicating "Login failed" while attempting to log on to the server. If the user logons do exist, but the SUID values (for 6.x) or SID values (for 7.0) in master..syslogins and the sysusers table in the user database differ, the users may have different permissions than expected in the user database. Solution: Use sp_change_users_login, which links a user entry in the sys.database_principals system catalog view in the current database to a SQL Server login of the same name. If a login with the same name does not exist, one will be created. Examine the result from the Auto_Fix statement to confirm that the correct link is in fact made. Avoid using Auto_Fix in security-sensitive situations. When you use Auto_Fix, you must specify user and password if the login does not already exist; otherwise you must specify user, but password will be ignored. login must be NULL. user must be a valid user in the current database. The login cannot have another user mapped to it. Execute the following stored procedure; in this example the login user name is "MyUser": exec sp_change_users_login 'Auto_Fix', 'MyUser' NOTE: sp_change_users_login cannot be used with a SQL Server login created from a Windows principal or with a user created by using CREATE USER WITHOUT LOGIN.

    Read the article

  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.

    Read the article

  • Server crashes when too much memory is allocated

    - by lindenb
    Hi all, my server crashes whenever one of my users is running a 'R' script (this script requires a large amount of memory). Below is the last top I saw: top - 11:32:39 up 20 min, 4 users, load average: 1.08, 0.85, 0.46 Tasks: 336 total, 2 running, 334 sleeping, 0 stopped, 0 zombie Cpu(s): 6.1%us, 0.2%sy, 0.0%ni, 93.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 65939968k total, 5131440k used, 60808528k free, 88256k buffers Swap: 68124664k total, 0k used, 68124664k free, 1077612k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 10392 cdina 25 0 3702m 3.5g 2428 R 100.0 5.6 7:51.82 R 10430 root 15 0 12872 1272 804 R 0.7 0.0 0:02.42 top 1 root 15 0 10348 704 592 S 0.0 0.0 0:02.95 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/0 is there a way to prevent my server from crashing ("don't run that script" is not an option :-) ) ? something like fixing a 'quota' for the memory allowed ?
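
    If banning the script is not an option, the practical alternative is to cap how much memory any single process may allocate, either globally in /etc/security/limits.conf (a hard "as" address-space limit for that user or group) or per invocation with a small wrapper. A minimal sketch of such a wrapper on Linux; the 16 GB figure is only an example and should be set to whatever the box can spare:

        #!/usr/bin/env python3
        # run_capped.py - launch a command with a hard address-space limit (Linux).
        # Usage: run_capped.py <max_gb> <command> [args...]
        import resource
        import subprocess
        import sys

        def main():
            max_bytes = int(float(sys.argv[1]) * 1024 ** 3)
            command = sys.argv[2:]

            def cap():
                # Applied in the child just before exec: allocations beyond the cap
                # fail inside R instead of dragging the whole server down.
                resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

            sys.exit(subprocess.call(command, preexec_fn=cap))

        if __name__ == "__main__":
            main()

    For example, run_capped.py 16 Rscript analysis.R makes the R job die with an allocation error at 16 GB rather than taking the machine with it; ulimit -v in the user's shell profile, or a matching line in limits.conf, achieves the same thing without a wrapper.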

    Read the article

  • Static objects and concurrency in a web application

    - by Ionut
    I'm developing small Java Web Applications on Tomcat server and I'm using MySQL as my database. Up until now I was using a connection singleton for accessing the database but I found out that this will ensure just on connection per Application and there will be problems if multiple users want to access the database in the same time. (They all have to make us of that single Connection object). I created a Connection Pool and I hope that this is the correct way of doing things. Furthermore it seems that I developed the bad habit of creating a lot of static object and static methods (mainly because I was under the wrong impression that every static object will be duplicated for every client which accesses my application). Because of this all the Service Classes ( classes used to handle database data) are static and distributed through a ServiceFactory: public class ServiceFactory { private static final String JDBC = "JDBC"; private static String impl; private static AccountService accountService; private static BoardService boardService; public static AccountService getAccountService(){ initConfig(); if (accountService == null){ if (impl.equalsIgnoreCase(JDBC)){ accountService = new JDBCAccountService(); } } return accountService; } public static BoardService getBoardService(){ initConfig(); if (boardService == null){ if (impl.equalsIgnoreCase(JDBC)){ boardService = new JDBCBoardService(); } } return boardService; } private static void initConfig(){ if (StringUtil.isEmpty(impl)){ impl = ConfigUtil.getProperty("service.implementation"); // If the config failed initialize with standard if (StringUtil.isEmpty(impl)){ impl = JDBC; } } } This was the factory class which, as you can see, allows just one Service to exist at any time. Now, is this a bad practice? What happens if let's say 1k users access AccountService simultaneously? I know that all this questions and bad practices come from a bad understanding of the static attribute in a web application and the way the server handles this attributes. Any help on this topic would be more than welcomed. Thank you for your time!

    Read the article

  • Locate rogue DHCP server

    - by Farseeker
    I know this is a serious long shot, but here we go. In the past week or so, users connected to a particular switch in our network (there are four dumb switches all connected, and it only affects some, not all, users on the one switch) have been getting DHCP addresses from a rogue DHCP server. I have physically checked every cable plugged into the switch in question to make sure that none of them has a router or wifi point attached to it. I know the IP of the DHCP server, but I cannot ping it, and it does not have a web interface. Does anyone have any suggestions on what I can do to locate it or shut it down? Unfortunately all the switches are unmanaged, and as mentioned, there's no physical device (that I can find) plugged in to anything. It's getting critical, because it's screwing up the PXE boot of a whole bunch of thin clients.
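
    One way to at least learn the rogue server's MAC address, which narrows down the hardware vendor and gives you something to match against ARP tables, is to broadcast a DHCP discover from a client on that switch and record every server that answers. A minimal sketch using scapy (a third-party library; needs root, and the interface name is an assumption):

        # find_dhcp_servers.py - broadcast a DHCPDISCOVER and list every server that answers.
        # Requires scapy and root privileges; "eth0" is an assumed interface name.
        from scapy.all import BOOTP, DHCP, IP, UDP, Ether, conf, get_if_hwaddr, srp

        IFACE = "eth0"

        def discover(iface):
            conf.checkIPaddr = False          # DHCP replies come from unexpected source IPs
            mac = get_if_hwaddr(iface)
            pkt = (Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")
                   / IP(src="0.0.0.0", dst="255.255.255.255")
                   / UDP(sport=68, dport=67)
                   / BOOTP(chaddr=bytes.fromhex(mac.replace(":", "")))
                   / DHCP(options=[("message-type", "discover"), "end"]))
            # multi=True keeps listening so both the legitimate and the rogue server show up.
            answered, _ = srp(pkt, iface=iface, multi=True, timeout=10, verbose=False)
            for _, reply in answered:
                print(f"DHCP offer from {reply[IP].src} (MAC {reply[Ether].src})")

        if __name__ == "__main__":
            discover(IFACE)

    The MAC's OUI prefix usually tells you whether it is, say, a consumer router or someone's laptop sharing its connection; with only unmanaged switches, the rest is unplugging one cable at a time while repeating the probe. dhcpdump, or a Wireshark capture filtered on bootp, shows the same offers if installing scapy is not an option.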

    Read the article

  • Why is RDP Licensing licensing the same device multiple times? [closed]

    - by NeerPatel
    Possible Duplicate: Can you help me with my software licensing issue? I've got a Citrix XenApp 6.5 Farm running on Win 2008 R2 Servers. I purchased 300 Device RDP/Remote App Licenses for ~200 users. We went with Device licenses, because most of the end users use the same machines. After 1 month of operating, we started to run out of licenses. It turns out the licensing service is consuming multiple licenses for the same machine. I can revoke licenses, but there is a limit to how many I can do. Is this operating correctly? The only explanation I can come up with is that the Licensing service is giving a license to a device for every server it connects to in our Citrix farm.

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of codes to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more! Read More >

    Read the article

  • Site too large to officially use Google Analytics?

    - by Jeff Atwood
    We just got this email from the Google Analytics team: We love that you love our product and use it as much as you do. We have observed however, that a website you are tracking with Google Analytics is sending over 1 million hits per day to Google Analytics servers. This is well above the "5 million pageviews per month per account" limit specified in the Google Analytics Terms of Service. Processing this amount of data multiple times a day takes up valuable resources that enable us to continue to develop the product for all Google Analytics users. As such, starting August 23rd, 2010, the metrics in your reports will be updated once a day, as opposed to multiple times during the course of the day. You will continue to receive all the reports and features in Google Analytics as usual. The only change will be that data for a given day will appear the following day. We trust you understand the reasons for this change. I totally respect this decision, and I think it's very generous to not kick us out. But how do we do this the right way -- what's the official, blessed Google way to use Google Analytics if you're a "whale" website with lots of hits per day? Or, are there other analytics services that would be more appropriate for very large websites?

    Read the article

  • iTunes: unable to authorize and unable to download media

    - by cbrulak
    When I try to authorize my iTunes account on Snow Leopard (10.6) with iTunes 9.0.2 I get this error: "There was an error storing your authorization information on this computer the required file was not found or has a permissions error. Correct..." And if I try to download something from the iTunes store, I get this: iTunes couldn't download your purchase.You don't have write access for your iTunes Media folder or a folder within it...." Edited Permissions: Inside "/Users/cbrulak/Music/iTunes": -rw-r--r--@ 1 cbrulak staff 3211 8 Dec 14:05 iTunes Library -rw-r--r-- 1 cbrulak staff 12288 8 Dec 14:05 iTunes Library Extras.itdb -rw-r--r-- 1 cbrulak staff 32768 8 Dec 13:48 iTunes Library Genius.itdb drwxr-xr-x 4 cbrulak staff 136 8 Dec 13:48 iTunes Media -rw-r--r--@ 1 cbrulak staff 14040 8 Dec 13:49 iTunes Music Library.xml -rw-r--r--@ 1 cbrulak staff 8 8 Dec 14:05 sentinel Inside /Users/cbrulak/Music: drwxr-xr-x 8 cbrulak staff 272 8 Dec 14:05 iTunes Any ideas?

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May. Which means that it was even more important and interesting to go forward with a great topic for this month. Earlier this year I already spoke to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed since then. It was just about finding the right date for the action. Furthermore, it was also a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates to a book title Avinash is currently working on - all the best with your approach on this and reach out to our MSCC craftsmen for recessions. Once again a big Thank you to Orange Ebene Accelerator on providing the venue for us, and the MSCC members involved on securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one as there are some interesting and exciting options coming up soon. Okay, let's talk about the meeting and version control systems again. As usual, I'm going to put my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers." And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about further down... Reactions of other attendees If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them: "Inspiring. Helped me understand more about GIT." -- Sean on event comments "Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments "Was present today and I'm very satisfied.I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup.It was great. Learned a lot from this meetup!!" -- Leonardo on event comments "Seriously, I can see how it’s going to ease my task and help me save time. Gone are the issues with files backups.  And since I’ll be doing my dissertation this year, using Git would help me a lot for my backups and I’m grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other to improve and evolve in our professional careers. Our agenda of the day Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs and usually there might additional people coming along. Then, for what ever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey it went smooth for the rest of the time being. And last but not least... 
our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of students of university - first year, second year and recent graduates - among them. Surprisingly, none of them was ever in contact with version control systems at all. I mean, this is a shocking discovery! Similar to the ability of touch-typing I'd say that being able to use (and master) any kind of version control system is compulsory in any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on the campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself. git - a modern approach to VCS - Nayar What a tour! Nayar gave us the full round of git from start to finish, even touching some more advanced techniques. First, he started to explain about the importance of version control systems as an essential tool for software developers, even working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of source code at any time. Then he showed how easy it is to install git on an Ubuntu based system but also mentioned that git is literally available for any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of a blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as reference. The syntax and the procedure is always the same, and Nayar clearly mentioned this to the attendees. Now, having a git repository in place it was about time to work on some "important" changes on the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment" - Well, rest assured that during daily development circles you will need less than 10 git commands on a regular base: git add, commit, push, pull, checkout, and merge And Nayar demo'd all of them. Much to the delight of everyone he also showed gitk which is the git repository browser. It's an UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision. Using gitk to display and browse information of a local git repository And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository into a remote system on github. Showing the web-based git browser and history handling, and then also explained and demo'd on how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects. 
Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details about their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free but there might be other important aspects that might have an impact on your decision. Anyways, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development. Visual Studio Online (VSO) - Jochen Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository but there is more in that package. Of course, it is related to software development on the Windows systems and the bonds are tightened towards the use of Visual Studio but out of experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smooth as expected. So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates the Application Life Cycle Methodology (ALM) of Microsoft in their platform. Meaning that you get agile project management with Backlogs, Sprints, Burn-down charts as well as the ability to track tasks, bug reports and work items next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly during the begin of our meeting, VSO gives you the possibility of an automated continuous integrated (CI) process which builds and can run tests of your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green actually -, and not only simplifies your life as a software developer but also reduces the sources of potential errors. Seamless integration and automated deployment between Microsoft Azure Web Sites and git repository But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially, while working on web projects it's absolutely astounishing that as soon as you commit your chances it just takes a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites. Upcoming Events and networking Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate to my opinion and this will have an impact on future planning of our meetups. Because I rather would like to see more conversations during and at the end of our meetings than everyone just packing their laptops, bags and accessories and rush off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging... 
In case that you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks Avinash for your vital input into today's conversation and I'm looking forward to get a grip on your book title very soon. My resume of the day Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without safety helmet. You don't do that! End of discussion. ;-) Nowadays, having access to free (as in cost) tools to install on your machine and numerous online platforms to host your source code for free for up to five users it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview on how to start using git and how to connect to various remote services like github or VSO.

    Read the article

  • What RAID should I use for website static files / content

    - by Simon
    I'm building a web server (IIS7) and would like to know the best practice for storing static content and the uploaded files of a website's users (predominantly pictures, but also other documents like PDFs). I will keep the operating system on a RAID 1 array. Where should I be keeping the actual website's pages and files, its own static content, and that of its users? Should I be placing this content on a separate RAID array, and if so, which type? I was considering using SLC SSDs (such as Intel's X25-E), but the following issues came to light. Will the SLC SSDs give any improvement over a 2.5" 15k SAS drive for this type of content? If I did use SSDs, I'm under the belief I would still need to use RAID for redundancy, yet I've heard the Intel X25-E doesn't support TRIM. Does this scrap them as a legitimate option?

    Read the article

  • MaxClients, Server Limits etc

    - by Moe
    Hello, I'm having some problems with my server. It's getting quite a bit of traffic, is very slow, and is sometimes inaccessible to my users. Here are the server specs: CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz - 16 Processors RAM: 2GB The values for the Apache config are: StartServers: 5 MaxSpareServers: 10 MinSpareServers: 5 MaxClients: 150 ServerLimit: 256 MaxRequestsPerChild: 1000 KeepAlive: On KeepAliveTimeout: 5 MaxKeepAliveRequests: 100 TimeOut: 300 What would be the optimal values for a server of my configuration to support the maximum number of users at a reasonable speed without killing the server? Thank you.
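
    With only 2 GB of RAM, MaxClients is the first value worth right-sizing: if Apache can spawn more children than fit in memory, the box starts swapping and everything slows to a crawl, which matches the symptoms. A rough sizing sketch follows; the 512 MB reserve and 30 MB per-child figures are assumptions, so measure the real resident size of your Apache processes with top or ps and plug those numbers in:

        # maxclients_estimate.py - back-of-the-envelope MaxClients sizing for Apache prefork.
        # RESERVED_MB and PER_CHILD_MB are assumptions; replace them with measured values.
        TOTAL_RAM_MB = 2048     # physical RAM on the server
        RESERVED_MB = 512       # OS, database, cron jobs, and everything that is not Apache
        PER_CHILD_MB = 30       # average resident size of one Apache child process

        max_clients = (TOTAL_RAM_MB - RESERVED_MB) // PER_CHILD_MB
        print(f"MaxClients ~= {max_clients}")   # about 51 with the figures above

    In other words, on this box a MaxClients anywhere near 150 almost guarantees swapping long before the 16 cores are busy; lowering it (and ServerLimit with it), or adding RAM, is usually the fix.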

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge a shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console. These two servers don't server a large number of users (one serves no more than 6 users, and the other serves around 20 users), they are in the same domain, but on different physical hardware (and at different sites). They don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections, this is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs however). Today, when one of the servers crashed, I noticed an entry in the event log, which looked similar to the following: Log Name: Application Source: ESENT Date: 27/11/2012 10:25:55 Event ID: 533 Task Category: General Level: Warning Keywords: Classic User: N/A Computer: HAL-FS-01.example.com Description: DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\ dfsr.db: A request to write to the file "\\.\E:\System Volume Information\ DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem. When the server started up again, I went to find the event log entry to investigate further and found that the event log entry was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log. Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include server 2012, server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this, and managed to resolve the problem?

    Read the article

  • Applying memory limits to screen sessions

    - by CollinJSimpson
    You can set memory usage limits for standard Linux applications in: /etc/security/limits.conf Unfortunately, I previously thought these limits only apply to user applications and not system services. This means that users can by bypass their limits by launching applications through a system service such as screen. I'd like to know if it's possible to let users use screen but still enforce application limits. Jeff had the great idea of using nohup which obeys user limits (wonderful!), but I would still like to know if it's possible to mimic the useful windowing features of screen. EDIT: It seems my screen sessions are now obeying my hard address space limits defined in /etc/security/limits.conf. I must have been making some mistake. I recently installed cpulimit, but I doubt that's the solution.Thanks for the nohup tip, Jeff! It's very useful. Link to CPU Limit package
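
    When in doubt about whether a process started inside screen really inherited the limits.conf settings, the kernel exposes the limits every running process is actually under. A minimal sketch that prints them for a given PID:

        # show_limits.py - print the effective resource limits of a running process (Linux).
        # Usage: show_limits.py <pid>   e.g. the PID of a job started inside screen
        import sys
        from pathlib import Path

        def show(pid):
            # /proc/<pid>/limits lists the soft and hard limits the process runs with,
            # which is the quickest way to confirm that limits.conf (via PAM) was applied.
            for line in Path(f"/proc/{pid}/limits").read_text().splitlines():
                if line.startswith(("Limit", "Max address space", "Max resident set",
                                    "Max processes", "Max open files")):
                    print(line)

        if __name__ == "__main__":
            show(int(sys.argv[1]))

    If the limits show up correctly in a login shell but as unlimited inside screen, the usual culprit is a screen session (or the service that launched it) started before PAM applied limits.conf; since the edit above says the sessions now obey the hard address-space limit, this is mostly a sanity check.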

    Read the article

  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives reboot. My main motivation was for differentiating between instances at the prompt using the \h format in PS1. EDIT I also changed permissions on my home directory. I made my home directory group writeable. END EDIT Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is: debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey). Should I have done something after changing the hostname? Now I can't get into the instance! :(
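
    The hostname change is most likely a red herring: with its default StrictModes setting, sshd refuses public-key authentication when the home directory (or ~/.ssh) is group- or world-writable, which matches the edit about making the home directory group-writable. The permissions have to be put back from somewhere other than SSH, e.g. by attaching the volume to another instance or via a user-data boot script; below is a minimal sketch of the modes sshd expects, with the username as a placeholder:

        # fix_ssh_perms.py - restore the permissions sshd's StrictModes expects.
        # Run as root on the affected filesystem; "ubuntu" is a placeholder username.
        import os
        from pathlib import Path

        HOME = Path("/home/ubuntu")

        def fix():
            os.chmod(HOME, 0o755)                  # home must not be group/world-writable
            ssh_dir = HOME / ".ssh"
            if ssh_dir.exists():
                os.chmod(ssh_dir, 0o700)           # .ssh readable only by the owner
                auth = ssh_dir / "authorized_keys"
                if auth.exists():
                    os.chmod(auth, 0o600)

        if __name__ == "__main__":
            fix()

    /var/log/auth.log on the instance confirms the diagnosis: sshd logs "Authentication refused: bad ownership or modes for directory ..." when this is the cause.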

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS (Here). We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust so I have be instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles? Ideally I would prefer to only have one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency) and if the WAN connection fails each office still has access to their respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers. It's not that I'm adverse to that (well, any more adverse than I am to whole thing to begin with) but to my simple mind it just seems, well a bit overkill. I can definitely see the benefits of functional separation when we're talking larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller/s. I assume you can lose two domain controllers just as easily as one.

    Read the article

  • Can connect to Samba server but cannot access shares?

    - by jlego
    I have setup a stand-alone box running Fedora 16 to use as a file-sharing and web development server. Needs to be able to share files with a PC running Windows 7 and a Mac running OSX Snow Leopard. I've setup Samba using the Samba configuration GUI tool. Added users to Fedora and connected them as Samba users (which are the same as the Windows and Mac usernames and passwords). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and Samba client through the firewall and set the ethernet to a trusted port in the firewall. Both the Windows and Mac machines can connect to the server and view the shares, however when trying to access the shares, Windows throws error 0x80070035 " Windows cannot access \SERVERNAME\ShareName." Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares but when choosing a share gives the error "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares, however when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured. smb.conf: [global] workgroup = workgroup server string = Server log file = /var/log/samba/log.%m max log size = 50 security = user load printers = yes cups options = raw printcap name = lpstat printing = cups [homes] comment = Home Directories browseable = no writable = yes [printers] comment = All Printers path = /var/spool/samba browseable = yes printable = yes [FileServ] comment = FileShare path = /media/FileServ read only = no browseable = yes valid users = user1, user2 [webdev] comment = Web development path = /var/www/html/webdev read only = no browseable = yes valid users = user1 How do I get samba sharing working? UPDATE: Before this box I had another box with the same version of fedora installed (16) and samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network while the new machine is connected to a router that has a static public IP (still dynamic internal IPs though.) Could either one of these be affecting Samba? UPDATE 2: As the directory I am trying to share is actually an entire internal disk, I have tried to things: 1.) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine) 2.) made a share that only included one of the folders on the disk instead of the entire disk with my user again as the owner. Both tests failed giving me the same errors regarding the network address. UPDATE 3: Not sure exactly what I did, but now whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. 
I believe this could very well be the problem. Any suggestions? UPDATE 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first harddrive (one fedora is installed on) and I can access that fine from Windows. However when I try to share any data on the second disk, I am receiving the same error. This I believe is the problem. I think I need to change some things in fstab or fdisk or something. UPDATE 5: So in fstab I mapped the drive to automount in a folder which works correctly. I also added the samba_share_t SElinux label to the mountpoint directory which now allows me to access the shares on the Windows machine, however I cannot see any of the files in the directory on the windows machine. (They are there, I can see them in the fedora file browser locally) UPDATE 6: Figured it out. See answer below

    Read the article
