Search Results

Search found 30316 results on 1213 pages for 'read the javadoc'.


  • Is There A Central Repository of Javascript Information?

    - by Brian
    For example, if you want information on PHP functions, you can go to http://www.php.net/. If you want information on Perl functions, you can go to http://www.cpan.org/ and/or use perldoc. If you want information on Java, you can go to http://java.sun.com and/or use javadoc. However, if you want information on JavaScript methods/functions and their attributes, return values, etc., where do you go? The reason I ask is I was playing with the "focus()" method and wondering if it could be passed any values or if it returned anything when called. I have done a cursory Google search but haven't found much. Does such a beast exist, or am I out of luck? Thanks for reading, and have a good day.

    Read the article

  • Training new employees on undocumented code

    - by glowcoder
    Our company has a large codebase (2,500+ classes/interfaces in just the core alone, many more in other projects) for our flagship software product. We've never really hired more than one developer at a time, so we don't have a real training process. We're going to be bringing in 2-5 more developers now, and probably more in the near future (to put things into perspective, we have 7 right now). Obviously, we would like to get these developers up to speed as soon as possible. The catch: almost all of our classes (95%+) are completely undocumented. No Javadoc, no design docs, basically nothing. What strategies can we employ to bring the new developers up to speed? I'd like to consider situations that include the existing code getting documented, but it's possible management won't allow the time for that, so I must also consider situations where that won't happen. Has anyone been there before? What worked well for you? Thanks!

    Read the article

  • PgJDBC: "no suitable driver found" when following tutorial, why?

    - by Celeritas
    I'm writing a Java program that queries a PostgreSQL database. I'm following this example and have trouble here:

        connection = DriverManager.getConnection(
                "jdbc:postgresql://127.0.0.1:5432/testdb", "mkyong", "123456");

    According to the Javadoc for DriverManager, the first string is "a database url of the form jdbc:subprotocol:subname". When I connect to the server I type psql -h dataserv.abc.company.com -d app -U emp24 and give the password qwe123 (for example's sake). What should the first argument of getConnection be? I've tried

        connection = DriverManager.getConnection(
                "jdbc:postgresql://dataserv.abc.company.com", "emp24", "qwe123");

    and get the runtime error: no suitable driver found. I've downloaded the PostgreSQL JDBC4 driver, version 9.2-1000.
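
    A minimal sketch of the URL form the tutorial implies, assuming the database from the psql command is named app and listens on the default port 5432, and that the PostgreSQL JDBC JAR is on the classpath ("no suitable driver found" usually means a malformed URL prefix or a missing driver JAR):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class PgConnect {
            public static void main(String[] args) throws SQLException {
                // Mirror the psql invocation: host, port, database, user, password.
                Connection connection = DriverManager.getConnection(
                        "jdbc:postgresql://dataserv.abc.company.com:5432/app",
                        "emp24", "qwe123");
                System.out.println("Connected: " + !connection.isClosed());
                connection.close();
            }
        }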

    Read the article

  • UML and Documenting Simple Diagrams

    - by Jason
    As part of a rewrite of an old Java application into C#, I'm writing an actual Software Design Specification. A problem I run into is when a method is too simple to bother with a sequence diagram (it doesn't interact with other objects). As an example, I have a simple POJO called Item, containing the following method:

        public String getCategoryKey() {
            StringBuffer value = new StringBuffer("s-");
            value.append(this.getModelID());
            value.append("-c");
            return value.toString();
        }

    The purpose and the algorithm for the method need to be documented. However, a sequence diagram is overkill. How would others document it? (I take no credit/blame for the given method; it's very old code and the author "forgot" to put their name in the Javadoc.)
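
    One lightweight option, sketched here as a suggestion rather than anything from the original post (the field and value below are illustrative stand-ins): capture the purpose and algorithm in the method's Javadoc, so the design spec can simply reference the API documentation instead of a diagram:

        public class Item {
            private final int modelID = 42; // illustrative stand-in for the real field

            public int getModelID() {
                return modelID;
            }

            /**
             * Builds the category key for this item.
             *
             * <p>Algorithm: concatenate the literal prefix "s-", the model ID,
             * and the literal suffix "-c"; e.g. a model ID of 42 yields "s-42-c".
             *
             * @return the category key in the form {@code s-<modelID>-c}
             */
            public String getCategoryKey() {
                return "s-" + getModelID() + "-c";
            }
        }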

    Read the article

  • Python as your main language. Possible?

    - by Deinumite
    I am currently attending college, and the languages that I will 'know' by graduation are C++ and Java. That being said, I am also in the process of teaching myself Python. I know that every programming language has its own pros and cons, but would it be possible to become a Python developer out of school? I always have more 'fun' programming in Python than I do in C++ or Java, and I am also in love with Python's documentation. I know C++ will always be on top in terms of speed, but what would be the benefit of memorizing every Javadoc versus focusing on Python instead? Are there good jobs to be had with Python? Edit: also, would it be beneficial for me to look at C# as well? Microsoft is really throwing their support behind it, so that could be a decent career path too.

    Read the article

  • Java: How to get Unicode name of a character (or its type category)?

    - by java.is.for.desktop
    Hello, everyone! The Character class in Java defines methods which check a given char argument for equality with certain Unicode chars or for belonging to some type category. These chars and type categories are named. As stated in the Javadoc, examples of named chars are HORIZONTAL TABULATION, FORM FEED, ...; examples of named type categories are SPACE_SEPARATOR, PARAGRAPH_SEPARATOR, ... However, being byte or int values instead of enums, the names of these types are "hidden" at runtime. So, is there a possibility to get characters' and/or type categories' names at runtime?
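
    A hedged pointer rather than a definitive answer: Java 7 added Character.getName(int), which returns the Unicode name of a code point at runtime, and the int category returned by Character.getType can be compared against the named constants; a minimal sketch:

        public class UnicodeNames {
            public static void main(String[] args) {
                int codePoint = 'A';
                // Java 7+: prints the Unicode name, e.g. "LATIN CAPITAL LETTER A"
                System.out.println(Character.getName(codePoint));
                // The type category is still an int; compare it to the named constants
                System.out.println(
                        Character.getType(codePoint) == Character.UPPERCASE_LETTER);
            }
        }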

    Read the article

  • Desktop SATA drives in SATA <-> FC array

    - by chris
    Let's assume you've got a box like one of these with space for 24 SATA disks. What are the best bits of advice for deploying this? For instance, should you be greedy and go for the 1.5 or 2 TB disks, or are they just not reliable enough to be used in an array like this, so you should stick with 640 GB or 750 GB disks instead? Also, I know that FC (or generically, "enterprise-class") disks have a different error recovery strategy than desktop disks. An enterprise disk will fail a read quickly and report to the controller that it wasn't able to read that block, and the RAID controller will quickly regenerate the info from the parity disk and mark the block as bad. A desktop disk, on the other hand, will try and try and try again to get the data, and in pathological cases this may cause the RAID controller to fail the whole disk because the read operation times out. So there are a couple of aspects to this question: What's the best sort of disk to get today (i.e. specific disks on the market in Feb 2010)? Generically, what should someone look for when trying to buy something like this that kinda walks the line between enterprise and consumer? Lastly, is there anything that can be done with current "consumer" disks to make them more suitable for array use? I.e., can you use a SMART configuration to change the error recovery strategy used by the disk? Thanks!

    Read the article

  • Explain why a folder's permissions differ depending on HOW the user is accessing the server (AFP vs SSH)

    - by Meltemi
    Hoping someone can explain what is probably fairly obvious... but confuses me. Imagine two users with admin privileges on our server (Mac OS X Server 10.5). Call them joe & bob. Both users are members of these groups:

        Staff      Group ID: 20
        Workgroup  Group ID: 1025

    Shared folder "devfolder" has sharing set as so:

        POSIX: Owner: joe    read & write
               Group: admin  read & write
               Other         no access
        ACL:   Workgroup     Allow  Read & write

    The question is why, when looking at the same folder, the ownership appears to change depending on who's doing the looking?!? Both are looking at the same folder on the server. From Joe's perspective:

        xserve:devfolder joe$ ls -l
        drwxrwxr-x 6 joe workgroup 204 May 20 19:32 app
        drwxrwxr-x 9 joe workgroup 306 May 20 19:32 config
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 db
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 doc
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 lib

    And from Bob's perspective (folder mounted on his machine via AFP):

        bobmac:devfolder bob$ ls -l
        drwxrwxr-x 6 bob _bob 264 May 20 19:32 app
        drwxrwxr-x 9 bob _bob 264 May 20 19:32 config
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 db
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 doc
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 lib

    Now if Bob connects to the server via SSH, his output is identical to Joe's, as expected. Can anyone tell me what the client is doing in this case, and what should be expected when Bob creates or updates files in this folder? What tools do I have to better understand this from the command line? Is this normal? Perhaps there's a "cleaner" way that wouldn't be confusing with "bob _bob"?!?

    Read the article

  • PostgreSQL lots of writes

    - by strife911
    Hi, I am using PostgreSQL for a scientific application (unsupervised clustering). The Python program is multi-threaded so that each thread manages its own postmaster process (one per core). Hence, there is a lot of concurrency. Each thread-process loops infinitely through two SQL queries. The first is for reading, the second is for writing. The read operation considers 500 times the amount of rows the write operation considers. Here is the output of dstat:

        ----total-cpu-usage---- ------memory-usage----- -dsk/total- --paging-- --io/total-
        usr sys idl wai hiq siq| used  buff  cach  free| read  writ|  in   out | read  writ
          4   0  32  64   0   0|3599M   63M   57G 1893M|1524k   16M|   0     0 |  98  2046
          1   0  35  64   0   0|3599M   63M   57G 1892M|1204k   17M|   0     0 |  68  2062
          2   0  32  66   0   0|3599M   63M   57G 1890M|1132k   17M|   0     0 |  62  2033
          2   1  32  65   0   0|3599M   63M   57G 1904M|1236k   18M|   0     0 |  80  1994
          2   0  31  67   0   0|3599M   63M   57G 1903M|1312k   16M|   0     0 |  70  1900
          2   0  37  60   0   0|3599M   63M   57G 1899M|1116k   15M|   0     0 |  71  1594
          2   1  37  60   0   0|3599M   63M   57G 1898M| 448k   17M|   0     0 |  39  2001
          2   0  25  72   0   0|3599M   63M   57G 1896M|1192k   17M|   0     0 |  78  1946
          1   0  40  58   0   0|3599M   63M   57G 1895M| 432k   15M|   0     0 |  38  1937

    I am pretty sure I could write more often than that, for I have seen it write up to 110-140M on dstat. How can I optimize this process?

    Read the article

  • Memcached session manager in Azure: Connection gets forcibly closed

    - by Edgar Pérez
    I am using Memcached Session Manager to handle Tomcat sessions in non-sticky mode. My deployment in Azure consists of a Worker Role with two instances which connect to an Azure VM running my Memcached server. Everything works pretty well; my session is persisted and retrieved by either of the two instances transparently. The problem arises when the session is idle for about 4 minutes; everything points to the Azure load balancer closing the spymemcached connection to the VM after some period of inactivity. My MSM configuration is this:

        <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
            memcachedNodes="n1:my-azure-vm.cloudapp.net:11211"
            sticky="false"
            sessionBackupAsync="false"
            sessionBackupTimeout="10000"
            lockingMode="uriPattern:/path1|/path2"
            requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js|ttf|eot|svg|woff)$"
            transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
            customConverter="de.javakaffee.web.msm.serializer.kryo.HibernateCollectionsSerializerFactory"/>

    The stack trace printed by the spymemcached client is this:

        INFO net.spy.memcached.MemcachedConnection: Reconnecting due to exception on {QA sa=/10.194.132.206:13000, #Rops=1, #Wops=0, #iq=0, topRop=net.spy.memcached.protocol.binary.StoreOperationImpl@1d95da8, topWop=null, toWrite=0, interested=1}
        java.io.IOException: An existing connection was forcibly closed by the remote host
            at sun.nio.ch.SocketDispatcher.read0(Native Method)
            at sun.nio.ch.SocketDispatcher.read(Unknown Source)
            at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
            at sun.nio.ch.IOUtil.read(Unknown Source)
            at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
            at net.spy.memcached.MemcachedConnection.handleReads(MemcachedConnection.java:303)
            at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:264)
            at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:184)
            at net.spy.memcached.MemcachedClient.run(MemcachedClient.java:1298)

    Given this idle-time limitation in Azure, is there any other way to make MSM work in the Azure cloud?
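
    One hedged workaround sketch, assuming the root cause really is the load balancer's roughly 4-minute idle timeout: keep the spymemcached connection warm by issuing a cheap request well inside that window. The key name is illustrative, not anything MSM defines:

        import java.net.InetSocketAddress;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import net.spy.memcached.MemcachedClient;

        public class MemcachedKeepAlive {
            public static void main(String[] args) throws Exception {
                MemcachedClient client = new MemcachedClient(
                        new InetSocketAddress("my-azure-vm.cloudapp.net", 11211));
                ScheduledExecutorService keepAlive =
                        Executors.newSingleThreadScheduledExecutor();
                // A get on any key (it need not exist) generates traffic, so the
                // load balancer never sees the connection as idle.
                keepAlive.scheduleAtFixedRate(
                        () -> client.get("msm-keepalive"),
                        0, 120, TimeUnit.SECONDS);
            }
        }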

    Read the article

  • Laptop Asus P50IJ with Intel 4500M GMA output going to a Dell 1907FP external monitor will not allow

    - by ProfessionalAmateur
    Hello - I just purchased an Asus P50IJ-X2 laptop, which has an Intel GMA 4500M video card, running Windows 7. At work I output this laptop to a Dell 1907FP LCD, which has a maximum resolution of 1280x1024. No matter what I do, Windows will not allow the laptop to set a resolution higher than 1024x768 on this LCD monitor. I've even gone to the extent of downloading PowerStrip (I'd post a link, but I'm new and can only enter 1 URL; if you google for PowerStrip it's the first option) to create a custom driver for my monitor, thinking Windows was having a hard time seeing the available resolutions it would accept. However, PowerStrip read the registry and properly sees the monitor and what it's capable of, so I'm now at a complete loss as to why Windows 7 will not allow me to set/use a 1280x1024 resolution for this external monitor (as my last laptop did running Vista). The Intel documentation (http://software.intel.com/en-us/articles/quick-reference-guide-to-intel-integrated-graphics/) indicates that the GMA 4500M should be able to run up to a 2560x1600 max resolution. The Dell 1907FP specification states it can run up to a 1280x1024 resolution. But no matter what, the computer will not allow me to set higher than 1024x768. I'm completely baffled, but I would really like to be able to output this laptop at a reasonable resolution; 1024x768 makes me feel like I'm using my mom's computer. Any help would be greatly appreciated! Here are some attached images (I apologize for the links; being new I cannot post images) that should help explain this better:

        Image 1 - From PowerStrip, showing the monitor's max accepted resolution and, at the top right, the max resolution my PC currently allows. (http://imgur.com/agrno.png)
        Image 2 - My Windows 7 resolution picker. (http://imgur.com/3nv6q.png)
        Image 3 - The 'List all modes' option taken from Screen Resolution > Advanced Settings > List All Modes. (http://imgur.com/AMREh.png)
        Image 4 - Monitor information from the registry read by PowerStrip, showing the laptop is able to read the necessary info from the LCD monitor. (http://imgur.com/hUX4D.png)

    Read the article

  • How do I enable the confluence-users group?

    - by M. Joanis
    I've got an issue with Atlassian Confluence. Normal users can't log in, but administrators can... Details below! I manage users using an Apple Open Directory (LDAP) server. I created two groups: "confluence-administrators" and "confluence-users". I've added team leaders and managers to both groups, and I've added some users to "confluence-users". Everyone in "confluence-administrators" can log in easily. People in "confluence-users" can't log in at all. When I look at the user list (in Confluence) and select a user to examine the list of groups he or she belongs to, I can see that the Confluence administrators are indeed members of the "confluence-administrators" group, but not a single user is a member of the "confluence-users" group. Not even the Confluence administrators, who are members of both groups! So I tried to have one of the "confluence-users" log in while watching the Confluence logs. Here's the result:

        2012-07-05 14:50:19,698 ERROR [http-8090-11] [core.event.listener.AutoGroupAdderListener] handleEvent Could not auto add user to group: Group <confluence-users> is read-only and cannot be updated
            at com.atlassian.crowd.directory.DbCachingRemoteDirectory.addUserToGroup(DbCachingRemoteDirectory.java:461)
        ...

    So it says the group is read-only... I'm not sure why that is a problem: "confluence-administrators" is read-only too, and it doesn't complain. Some things I don't think are part of the problem: I've synchronized Confluence with LDAP many, many times. I have verified many times that I didn't make a typo while setting up the groups on the LDAP server. LDAP synchronization goes well; no errors in the logs (only INFO-level log messages). The user exists; errors in the logs are different when a user doesn't exist. Any help is most welcome!

    Read the article

  • "Windows detected a hard drive" issue in Windows 7 x64

    - by Jasiu
    I upgraded to the OCZ-Agility3 120GB from a 60GB OCZ Vertex2 SSD. I cloned the drive from the Vertex to the new Agility. Everything seemed to have gone well, and I have not had any problems. Recently, in the past month, I have been getting the "Windows detected a hard drive" error from the title. I downloaded the OCZToolboxMP and ran the SMART utility and don't see anything wrong:

        SMART READ DATA
        ModelNumber   : OCZ-AGILITY3
        Serial Number : OCZ-Y1945X77438P4NU6
        WWN           : 5-e8-3a-97 ebea5ba76
        Revision      : 10

        Attributes List
          1: SSD Raw Read Error Rate            Normalized Rate: 70 total ECC and RAISE errors
          5: SSD Retired Block Count            Reserve blocks remaining: 100%
          9: SSD Power-On Hours                 Total hours power on: 968
         12: SSD Power Cycle Count              Count of power on/off cycles: 28
        171: SSD Program Fail Count             Total number of Flash program operation failures: 0
        172: SSD Erase Fail Count               Total number of Flash erase operation failures: 0
        174: SSD Unexpected Power Loss Count    Total number of unexpected power losses: 11
        177: SSD Wear Range Delta               Delta between most-worn and least-worn Flash blocks: 0
        181: SSD Program Fail Count             Total number of Flash program operation failures: 0
        182: SSD Erase Fail Count               Total number of Flash erase operation failures: 0
        187: SSD Reported Uncorrectable Errors  Uncorrectable RAISE errors reported to the host for all data access: 4145
        194: SSD Temperature Monitoring         Current: 30 High: 30 Low: 30
        195: SSD ECC On-the-fly Count           Normalized Rate: 120
        196: SSD Reallocation Event Count       Total number of reallocated Flash blocks: 100
        201: SSD Uncorrectable Soft Read Error Rate  Normalized Rate: 120
        204: SSD Soft ECC Correction Rate (RAISE)    Normalized Rate: 120
        230: SSD Life Curve Status              Current state of drive operation based upon the Life Curve: 100
        231: SSD Life Left                      Approximate SSD life remaining: 100%
        241: SSD Lifetime Writes From Host      Lifetime writes: 893 GB
        242: SSD Lifetime Reads From Host       Lifetime reads: 968 GB

    Does anyone have any ideas of what might be wrong and/or how I can go about fixing this? Please let me know if there is other information I can provide. Thanks for your help. Windows 7 x64 SP1, AMD Phenom II X4 940, 8GB RAM.

    Read the article

  • ZFS - zpool ARC cache plus L2ARC benchmarking

    - by jemmille
    I have been doing lots of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache (L2ARC) to see how much faster I can get the read speed. I also have 24GB of RAM in the machine that acts as ARC. vol0 is 6.4TB and the cache disks are 60GB SSDs. The pool is as follows:

          pool: vol0
         state: ONLINE
         scrub: none requested
        config:

                NAME                       STATE     READ WRITE CKSUM
                vol0                       ONLINE       0     0     0
                  c1t8d0                   ONLINE       0     0     0
                cache
                  c3t5001517958D80533d0    ONLINE       0     0     0
                  c3t5001517959092566d0    ONLINE       0     0     0

    The issue is I'm not seeing any difference with the SSDs installed. I've tried bonnie++ benchmarks and some simple dd commands to write a file and then read it back. I have run benchmarks before and after adding the SSDs. I've ensured the file sizes are at least double my RAM, so there is no way it can all get cached locally. Am I missing something here? When am I going to see the benefits of having all that cache? Am I simply not going to under these circumstances? Are the benchmark programs not good for testing the effect of cache because of the way (and what) they write and read?

    Read the article

  • MSI Installer error 2203; how to force permissions on installer directory?

    - by goober
    [Cross-posted on StackOverflow.com as well, because the question relates to development. Feel free to let me know where it best belongs.] Hi all, I'll try to bullet-point to keep it short:

    Background / Issue

    - Trying to install ASP.NET MVC 3 RC on my Windows 7 machine.
    - Uninstalled other versions of MVC (2 and 3 Beta 1).
    - Ran the installer -- got a generic error, 2203.
    - The log files said it was a permissions error on C:\Windows\Installer.
    - Checked C:\Windows\Installer -- sure enough, it's marked as read-only.
    - I un-checked "Read-Only" in the folder properties and applied. It appears to open the dialog and apply to all files. However, when clicking properties again, the read-only box is back to checked.
    - Checked the Security tab of the folder -- both SYSTEM and the Administrators group have full access.
    - Checked ownership -- the Administrators group is listed as an owner.
    - Verified that I'm in the system as an Administrator (in fact, the only account in the Administrators group besides Administrator).

    So, what gives? Thanks in advance for any help you can provide!

    Read the article

  • Strange performance issue with Dell R7610 and LSI 2208 RAID controller

    - by GregC
    Connecting the controller to any of the three PCIe x16 slots yields choppy read performance around 750 MB/sec; the lowly PCIe x4 slot yields a steady 1.2 GB/sec read. This is given the same files, same Windows Server 2008 R2 OS, same RAID6 24-disk Seagate ES.2 3TB array on an LSI 9286-8e, same Dell R7610 Precision Workstation with A03 BIOS, same W5000 graphics card (no other cards), same settings, etc. I see super-low CPU utilization in both cases. SiSoft Sandra reports x8 at 5GT/sec in the x16 slot, and x4 at 5GT/sec in the x4 slot, as expected. I'd like to be able to rely on the sheer speed of the x16 slots. What gives? What can I try? Any ideas? Please assist. Cross-posted from http://en.community.dell.com/support-forums/desktop/f/3514/t/19526990.aspx

    Follow-up information: We did some more performance testing, reading from 8 SSDs connected directly (without an expander chip). This means that both SAS cables were utilized. We saw nearly double the performance, but it varied from run to run: 2.0, 1.8, 1.6, and 1.4 GB/sec were observed, then performance jumped back up to 2.0. The SSD RAID0 tests were conducted in a x16 PCIe slot, all other variables kept the same. It seems to me that we were getting double the performance of the HDD-based RAID6 array. Just for reference: the maximum possible read burst speed over a single channel of SAS 6Gb/sec is 570 MB/sec due to 8b/10b encoding and protocol limitations (a SAS cable provides four such channels).

    Read the article

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers:

    - 48GB RAM
    - 8 GigE NICs
    - 2 FC NICs
    - 2x 72GB RAID1 hard drives
    - Server 2008 R2 host

    I also have a Fibre Channel SAN:

    - 16x 146GB RAID10 hard drives
    - 2x dual-port FC controllers (controllers A and B both have ports 1 and 2)
    - Server 1 has fiber to ports A1 and B1
    - Server 2 has fiber to ports A2 and B2
    - I kept the default config with 1 virtual disk and 1 volume
    - The default mappings show ports A1, A2, B1, B2 on LUN 0 with read-write

    My goal is:

    - 2x VMs with IIS and guest-level failover
    - 2x VMs with SQL 2008 Enterprise using a single DB and guest-level failover
    - 1x VM that is an application server, preferably with host failover. From what I read, this will also need AD for clustering to work.
    - I need at least 1 VM always running for IIS and the SQL DB. This includes hardware failover and application failover (e.g. rebooting a VM for critical updates).

    I was told I could install the VMs and run them from the SAN, and this is what I've tried: Installed MPIO and Hyper-V on Server 1 and Server 2. Added the SAN as disk E: on both servers, made it GPT, and formatted it NTFS. Configured Hyper-V on both servers to use E:\VD and E:\VHD. On Server 1, I was able to install 3 VMs on the SAN and all worked well. On Server 2, I would start installing the other 2 VMs, but at some point the VMs would always get a corrupt .VHD message (on either server). Everything I found about the message typically related to antivirus, so I removed all antivirus on both host servers (now only running 2008 R2). I reformatted drive E: (the SAN), recreated the VHD and VD directories, installed 3 VMs on Server 1, and then had the same issue when installing VMs on Server 2. Obviously something is wrong, but I'm not certain what exactly. My questions:

    1) Are my goals possible with this hardware setup? I've read that 2008 R2 supports FC SANs, but a lot of articles seem to only give examples with iSCSI setups.
    2) What would be the suggested route in setting up the SAN (disks, volumes, LUNs)? I've worked with Hyper-V on a single machine before and never had issues. Actual experience working with SANs and clustering is new to me.

    Any suggestions or recommendations to get me in the right direction would be much appreciated.

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

        mgorven@moab:~% sudo lvdisplay /dev/moab/backup
          --- Logical volume ---
          LV Name                /dev/moab/backup
          VG Name                moab
          LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                500.00 GiB
          Current LE             128000
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     2048
          Block device           252:3

        mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
          type:    LUKS1
          cipher:  aes-cbc-essiv:sha256
          keysize: 256 bits
          device:  /dev/mapper/moab-backup
          offset:  3072 sectors
          size:    1048572928 sectors
          mode:    read/write

        mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun  1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun  1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks

    Read the article

  • Fedora 16 can connect to samba share using smbclient but not in nautilus 3.2.1

    - by Nathan Jones
    I have a machine running Ubuntu 11.10 Server acting as a Samba server to share my home directory. Everything works fine on my Windows 7 machine, but on my Fedora 16 laptop, if I use Nautilus to try to access the share using smb://192.168.0.8/nathan in the location bar, it just has the loading cursor and does nothing. It never shows any errors, nothing. Using smbclient works just fine, but I'd like to get it working in Nautilus. I know that there can be problems with SELinux and Samba, so I created a file called booleans.local that contains samba_enable_home_dirs=1. My smb.conf file looks like this:

        # For Unix password sync to work on a Debian GNU/Linux system, the following
        # parameters must be set (thanks to Ian Kahan <<[email protected]> for
        # sending the correct chat script for the passwd program in Debian Sarge).
           passwd program = /usr/bin/passwd %u
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .

        # This boolean controls whether PAM will be used for password changes
        # when requested by an SMB client instead of the program listed in
        # 'passwd program'. The default is 'no'.
           pam password change = yes

        # This option controls how unsuccessful authentication attempts are mapped
        # to anonymous connections
           map to guest = bad user

        ########## Domains ###########
        # Is this machine able to authenticate users. Both PDC and BDC
        # must have this setting enabled. If you are the BDC you must
        # change the 'domain master' setting to no
        #
        ;   domain logons = yes
        #
        # The following setting only takes effect if 'domain logons' is set
        # It specifies the location of the user's profile directory
        # from the client point of view)
        # The following required a [profiles] share to be setup on the
        # samba server (see below)
        ;   logon path = \\%N\profiles\%U
        # Another common choice is storing the profile in the user's home directory
        # (this is Samba's default)
        #   logon path = \\%N\%U\profile

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the location of a user's home directory (from the client
        # point of view)
        ;   logon drive = H:
        #   logon home = \\%N\%U

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the script to run during logon. The script must be stored
        # in the [netlogon] share
        # NOTE: Must be store in 'DOS' file format convention
        ;   logon script = logon.cmd

        # This allows Unix users to be created on the domain controller via the SAMR
        # RPC pipe. The example command creates a user account with a disabled Unix
        # password; please adapt to your needs
        ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u

        # This allows machine accounts to be created on the domain controller via the
        # SAMR RPC pipe.
        # The following assumes a "machines" group exists on the system
        ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u

        # This allows Unix groups to be created on the domain controller via the SAMR
        # RPC pipe.
        ; add group script = /usr/sbin/addgroup --force-badname %g

        ########## Printing ##########
        # If you want to automatically load your printer list rather
        # than setting them up individually then you'll need this
        #   load printers = yes

        # lpr(ng) printing. You may wish to override the location of the
        # printcap file
        ;   printing = bsd
        ;   printcap name = /etc/printcap

        # CUPS printing. See also the cupsaddsmb(8) manpage in the
        # cupsys-client package.
        ;   printing = cups
        ;   printcap name = cups

        ############ Misc ############
        # Using the following line enables you to customise your configuration
        # on a per machine basis. The %m gets replaced with the netbios name
        # of the machine that is connecting
        ;   include = /home/samba/etc/smb.conf.%m

        # Most people will find that this option gives better performance.
        # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html
        # for details
        # You may want to add the following on a Linux system:
        #   SO_RCVBUF=8192 SO_SNDBUF=8192
        #   socket options = TCP_NODELAY

        # The following parameter is useful only if you have the linpopup package
        # installed. The samba maintainer and the linpopup maintainer are
        # working to ease installation and configuration of linpopup and samba.
        ;   message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' &

        # Domain Master specifies Samba to be the Domain Master Browser. If this
        # machine will be configured as a BDC (a secondary logon server), you
        # must set this to 'no'; otherwise, the default behavior is recommended.
        #   domain master = auto

        # Some defaults for winbind (make sure you're not using the ranges
        # for something else.)
        ;   idmap uid = 10000-20000
        ;   idmap gid = 10000-20000
        ;   template shell = /bin/bash

        # The following was the default behaviour in sarge,
        # but samba upstream reverted the default because it might induce
        # performance issues in large organizations.
        # See Debian bug #368251 for some of the consequences of *not*
        # having this setting and smb.conf(5) for details.
        ;   winbind enum groups = yes
        ;   winbind enum users = yes

        # Setup usershare options to enable non-root users to share folders
        # with the net usershare command.
        # Maximum number of usershare. 0 (default) means that usershare is disabled.
        ;   usershare max shares = 100

        # Allow users who've been granted usershare privileges to create
        # public shares, not just authenticated ones
           usershare allow guests = yes

        #======================= Share Definitions =======================
        # Un-comment the following (and tweak the other settings below to suit)
        # to enable the default home directory shares. This will share each
        # user's home director as \\server\username
        [homes]
           comment = Home Directories
           browseable = yes

        # By default, the home directories are exported read-only. Change the
        # next parameter to 'no' if you want to be able to write to them.
           read only = no

        # File creation mask is set to 0700 for security reasons. If you want to
        # create files with group=rw permissions, set next parameter to 0775.
        ;   create mask = 0775

        # Directory creation mask is set to 0700 for security reasons. If you want to
        # create dirs. with group=rw permissions, set next parameter to 0775.
        ;   directory mask = 0775

        # By default, \\server\username shares can be connected to by anyone
        # with access to the samba server. Un-comment the following parameter
        # to make sure that only "username" can connect to \\server\username
        # The following parameter makes sure that only "username" can connect
        #
        # This might need tweaking when using external authentication schemes
           valid users = %S

        # Un-comment the following and create the netlogon directory for Domain Logons
        # (you need to configure Samba to act as a domain controller too.)
        ;[netlogon]
        ;   comment = Network Logon Service
        ;   path = /home/samba/netlogon
        ;   guest ok = yes
        ;   read only = yes

        # Un-comment the following and create the profiles directory to store
        # users profiles (see the "logon path" option above)
        # (you need to configure Samba to act as a domain controller too.)
        # The path below should be writable by all users so that their
        # profile directory may be created the first time they log on
        ;[profiles]
        ;   comment = Users profiles
        ;   path = /home/samba/profiles
        ;   guest ok = no
        ;   browseable = no
        ;   create mask = 0600
        ;   directory mask = 0700

        [printers]
           comment = All Printers
           browseable = no
           path = /var/spool/samba
           printable = yes
           guest ok = no
           read only = no
           create mask = 0700

        # Windows clients look for this share name as a source of downloadable
        # printer drivers
        [print$]
           comment = Printer Drivers
           path = /var/lib/samba/printers
           browseable = yes
           read only = yes
           guest ok = no

        # Uncomment to allow remote administration of Windows print drivers.
        # You may need to replace 'lpadmin' with the name of the group your
        # admin users are members of.
        # Please note that you also need to set appropriate Unix permissions
        # to the drivers directory for these users to have write rights in it
        ;   write list = root, @lpadmin

        # A sample share for sharing your CD-ROM with others.
        ;[cdrom]
        ;   comment = Samba server's CD-ROM
        ;   read only = yes
        ;   locking = no
        ;   path = /cdrom
        ;   guest ok = yes

        # The next two parameters show how to auto-mount a CD-ROM when the
        # cdrom share is accesed. For this to work /etc/fstab must contain
        # an entry like this:
        #
        #   /dev/scd0   /cdrom   iso9660   defaults,noauto,ro,user   0 0
        #
        # The CD-ROM gets unmounted automatically after the connection to the
        #
        # If you don't want to use auto-mounting/unmounting make sure the CD
        # is mounted on /cdrom
        #
        ;   preexec = /bin/mount /cdrom
        ;   postexec = /bin/umount /cdrom

    smbusers:

        <nathan> = <"nathan">

    Any help would be very much appreciated! Thanks!

    Read the article

  • Way to speed up load-balanced ssl using nginx?

    - by paulnsorensen
    So the setup for our website is 4 nodes running Rails 3 and nginx 1 that all use the same GoDaddy certificate. Because we are a paid site, we have to maintain PCI-DSS compliance and thus have to use the more expensive SSL ciphers; we also force SSL using Rack. I've recently switched over to Linode's NodeBalancer (which I've read is an HA cluster), and we're not getting the performance we'd ideally like. From what I've read, it looks like terminating SSL on the nodes using the expensive ciphers is what is causing the poor performance, but I'd like to be thorough. Is there anything I can do? I've read about other ways to terminate SSL before the NodeBalancer (like using stud), but I don't know enough about these solutions. We certainly don't want to do anything experimental or anything that has a single point of failure. If there really isn't anything I can do to speed up the SSL handshake, my alternative would be to serve certain pages in Rails from a secure subdomain and the rest from an insecure one. I've found a few guides that walk through that, but my resulting question is: in that situation, would it be better to have nginx handle forcing SSL on the secure subdomain instead of Rails? Thanks!

    Read the article

  • Unable to authenticate to Windows Server 2003 for file browsing as non-administrator user.

    - by Fopedush
    I've got a Windows Server 2003 box containing a RAID5 array I use for mass storage. I want to set up a special non-administrator account that can be used to browse files over the network with only read access. Ideally I'll map my network drive as this user to avoid accidentally hosing my data, and mount as an administrator user on occasions where I actually need write access. I've created a non-administrator user on the Windows Server box (called "ReadOnly") and granted the user read permissions on the folders I need. However, when I try to browse to the files and authenticate as this user, I'm told "Permission denied". If I throw the ReadOnly user into the Administrators group, however, I can authenticate and browse just fine. I am, of course, only attempting to browse to folders for which I have given this user read permissions. Obviously my ReadOnly user is missing some privilege here, but I can't figure out what it is. I've been digging around in the Group Policy editor all day to no avail. What am I missing? Fake Edit: I'm doing my browsing from a Windows 7 box, but I don't think that is relevant.

    Read the article

  • Is it the address bus size or the data bus size that determines "8-bit, 16-bit, 32-bit, 64-bit" systems?

    - by learner
    My simple understanding is as follows. Memory (RAM) is composed of bits, groups of 8 of which form bytes, each of which can be addressed, hence byte-addressable memory. The address bus carries the location of a byte of memory. If an address bus is of size 32 bits, it can hold up to 2^32 numbers, and hence can refer to up to 2^32 bytes of memory = 4GB of memory; any memory beyond that is useless. The data bus is used to send the value to be written to/read off the memory. If I have a data bus of size 32 bits, a maximum of 4 bytes can be written to/read off the memory at a time. I find no relation between this size and the maximum memory size possible. But I read here that: "Even though most systems are byte-addressable, it makes sense for the processor to move as much data around as possible. This is done by the data bus, and the size of the data bus is where the names 8-bit system, 16-bit system, 32-bit system, 64-bit system, etc. come from. When the data bus is 8 bits wide, it can transfer 8 bits in a single memory operation. When the data bus is 32 bits wide (as is most common at the time of writing), at most 32 bits can be moved in a single memory operation." This says that the size of the data bus is what gives an OS the name 8-bit, 16-bit, and so on. What is wrong with my understanding?
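
    A quick check of the address-bus arithmetic in the question, as a minimal sketch:

        public class BusMath {
            public static void main(String[] args) {
                int addressBusBits = 32;
                // A 32-bit address bus can name 2^32 distinct byte addresses.
                long addressableBytes = 1L << addressBusBits;
                System.out.println(addressableBytes); // 4294967296, i.e. 4GB
                // A 32-bit data bus moves at most 32/8 = 4 bytes per memory
                // operation, independent of how much memory is addressable.
                System.out.println((32 / 8) + " bytes per transfer");
            }
        }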

    Read the article

  • SQL Server Windows-only Authentication Strategy problem

    - by Mike Thien
    I would like to use Windows-only authentication in SQL Server for our web applications. In the past we've always created one all-powerful SQL login for the web application. After doing some initial testing, we've decided to create Windows Active Directory groups that mimic the security roles of the application (i.e. Administrators, Managers, Users/Operators, etc.). We've created logins in SQL Server mapped to these groups and given them access to the application's database. In addition, we've created SQL Server database roles and assigned each group the appropriate role. This is working great. My issue is that for most of the applications, everyone in the company should have read access to the reports (and hence the data). As far as I can tell, I have 2 options: 1) Create a read-only/viewer AD group and put everyone in it. 2) Use the "domain\domain users" group(s) and assign them the correct roles in SQL. What is the best and/or easiest way to allow everyone read access to specific database objects using a Windows-only authentication method?

    Read the article

  • FreeNAS AFP Doesn't Authenticate

    - by Timothy R. Butler
    I just set up a FreeNAS 8.0.3 server and am trying to use its AFP (Netatalk) service to access it from a Mac OS X Lion system. I created the ZFS volume, set its permissions to include my user in its owner group (and set group write permissions), and created an AFP share with AFP3, telling that share to "allow" @uninet (my group). I have a user on the server named tbutler, matching the user on my Mac. I can see the server, "Beatrice," in Finder. When I try to log in in Finder using "Connect As...", the user "tbutler" and the proper password, I am returned to the main Finder window with the black bar now saying "Connection Failed." Here's the most recent data from /var/messages on the server, which shows me trying to log in both as a "Registered User" and a "Guest":

        Jul 30 00:29:07 freenas afpd[8972]: AFP3.3 Login by nobody
        Jul 30 00:29:08 freenas afpd[8972]: AFP logout by nobody
        Jul 30 00:29:08 freenas afpd[8972]: dsi_stream_read: len:0, unexpected EOF
        Jul 30 00:29:08 freenas afpd[8972]: afp_over_dsi: client logged out, terminating DSI session
        Jul 30 00:29:08 freenas afpd[8972]: AFP statistics: 0.14 KB read, 0.12 KB written
        Jul 30 00:29:14 freenas afpd[8975]: AFP3.3 Login by tbutler
        Jul 30 00:29:14 freenas afpd[8975]: AFP logout by tbutler
        Jul 30 00:29:14 freenas afpd[8975]: dsi_stream_read: len:0, unexpected EOF
        Jul 30 00:29:14 freenas afpd[8975]: afp_over_dsi: client logged out, terminating DSI session
        Jul 30 00:29:14 freenas afpd[8975]: AFP statistics: 0.62 KB read, 0.48 KB written
        Jul 30 00:29:20 freenas afpd[8978]: AFP3.3 Login by tbutler
        Jul 30 00:29:20 freenas afpd[8978]: AFP logout by tbutler
        Jul 30 00:29:20 freenas afpd[8978]: dsi_stream_read: len:0, unexpected EOF
        Jul 30 00:29:20 freenas afpd[8978]: afp_over_dsi: client logged out, terminating DSI session
        Jul 30 00:29:20 freenas afpd[8978]: AFP statistics: 0.62 KB read, 0.48 KB written
        Jul 30 00:29:27 freenas afpd[8983]: AFP3.3 Login by nobody

    (My clock is clearly not properly set, but be that as it may...) Any suggestions?

    UPDATE: Apparently this problem occurs if one gives the AFP share a password in the AFP share settings box. When I removed the password and tried to log in using a user account again, it worked just fine.

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM, I've been getting errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with the error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read again about creating the directory empty and re-running "depmod -a". I now don't get the dependencies error, but I get this instead, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it has to do with the VM config. Any help would be greatly appreciated.

    Read the article
