Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Unix System Calls


System calls are functions that a programmer can call to perform the services of the operating system. There are several online books that describe them at some length, for example Programming in C.

System calls can be roughly grouped into five major categories (System call - Wikipedia): process control, file management, device management, information maintenance, and communication.

We will reproduce Wikipedia classification with some modifications:

  1. Process Control (see separate page Process control ).
  2. File management.
  3. Device Management.
  4. Information Maintenance.
  5. Communication.

Man pages should be used as a reference when you study Unix system calls.  The manual pages are divided into eight sections, with section 2 devoted to Unix system calls. They are organized as follows:

1. Commands This section provides information about user-level commands, such as ps and ls.

2. UNIX System Calls This section gives information about the library calls that interface with the UNIX operating system, such as open for opening a file, and exec for executing a program file. These are often accessed by C programmers.

3. Libraries This section contains the library routines that come with the system. An example library that comes with each system is the math library, containing such functions as fabs for absolute value. Like the system call section, this is relevant to programmers.

4. File Formats This section contains information on the file formats of system files, such as init, group, and passwd. This is useful for system administrators.

5. Miscellaneous This section contains information on various system characteristics. For example, a manual page exists here to display the complete ASCII character set (ascii).

6. Games This section usually contains directions for games that came with the system.

7. Device Drivers This section contains information on UNIX device drivers, such as scsi and floppy. These are usually pertinent to someone implementing a device driver, as well as the system administrator.

8. System Maintenance This section contains information on commands that are useful for the system administrator, such as how to format a disk.

Section 2 can be very useful as a reference. When you invoke the man command, the output is sent through what is known as a pager. This is a command that lets you view the text one page at a time. The default pager for most UNIX systems is the more command. You can, however, specify a different one by setting the PAGER environment variable.

The second source of information for a particular call is Google, which usually turns up useful links for it.

Keep in mind that many system calls involve access to data that users must not be permitted to corrupt or even change; that is why these operations are performed by the kernel on the user's behalf.

It's often difficult to tell what is a library routine (e.g., printf()) and what is a system call (e.g., sleep()). They are used in the same way, and the only way to tell is to remember which is which, or to check which section of the manual documents them.

To obtain information about a system call or library routine (how to use it, what it returns, what it does, etc.), read the on-line manual. If you are looking for the manual page on read, you can do:

% man 2 read     (if read is a system call)
% man 3 read     (if read is a library routine)

All of the entries in Section 2 of the manuals are system calls, and all of the entries in Section 3 are library routines; so if you don't know whether something is a system call, or a library routine, try looking it up in both Sections 2 and 3.

Here is an excerpt from Rochkind's book that introduces system calls and explains how to use them:

The subject of this book is UNIX system calls, which form the interface between the UNIX kernel and the user programs that run on top of it. Those who interact only with commands, like the shell, text editors, and other application programs, may have little need to know much about system calls, but a thorough knowledge of them is essential for UNIX programmers. System calls are the only way to access kernel facilities such as the file system, the multitasking mechanisms, and the interprocess communication primitives.

System calls define what UNIX is. Everything else -- subroutines and commands -- is built on this foundation. While the novelty of many of these higher-level programs has been responsible for much of UNIX's renown, they could as well have been programmed on any modern operating system. When one describes UNIX as elegant, simple, efficient, reliable, and portable, one is referring not to the commands (some of which are none of these things), but to the kernel. How hard is it to learn UNIX system calls? When I first started programming UNIX, in 1973, it wasn't very hard at all. UNIX -- and its programmer's manual -- was only a fraction of its present size and complexity. There weren't any programming examples in the manual, but all of the source code was on-line and it was easy to read through programs like the shell or the editor to see how system calls worked. Perhaps most important, there were more experienced people around to ask for help. Even Dennis Ritchie and Ken Thompson, the inventors of UNIX, took time out to help me.

Today's aspiring UNIX programmers have a tougher challenge than I did. UNIX is now so widely dispersed that an expert is unlikely to be nearby. Most computers running UNIX are licensed for the object code only, so the source code for commands is unavailable. There are twice as many system calls now as there were in 1973, and the quality of the manual has deteriorated markedly from the days when Ritchie and Thompson did all the system call write-ups. It's now full of grotesque paragraphs like this:

If the set-user-ID mode bit of the new process file is set (see chmod(2)), exec sets the effective user ID of the new process to the owner ID of the new process file. Similarly, if the set-group-ID mode bit of the new process file is set, the effective group ID of the new process is set to the group ID of the new process file. The real user ID and the real group ID of the new process remain the same as those of the calling process.

As an old-timer I understood what this meant when I first saw it, but a newcomer is sure to be completely baffled. And until now, there's been nowhere to turn. This book's goal is to allow any experienced programmer to learn UNIX system calls as easily as I did, and then to use them wisely and portably. It's packed with examples -- over 3500 lines of C code. Instead of just tactics (how the system calls are used), I've tried also to include strategies (why and when they're used). And there's lots of informal advice as well, based on my experiences programming UNIX over the past dozen years.

Flavors of Unix

The number of different flavors of Unix is amazing, and what is worse, the system calls and their parameters change from flavor to flavor. One of the goals in writing Unix programs is to make them as portable as possible across all the flavors of Unix; obviously this isn't always entirely possible.

The number of system calls has quadrupled, more or less, depending on what you mean by "system call." The first edition of Advanced UNIX Programming focused on only about 70 genuine kernel system calls -- for example, open, read, and write, but not library calls like fopen, fread, and fwrite. The second edition includes about 300. (There are about 1,100 standard function calls in all, but many of those are part of the Standard C Library or are obviously not kernel facilities.)

However, most of the original 70 Unix system calls haven't changed, so if you try to use these, you should be all right.

Historically there have been several variants of the Unix system call interface, most notably the System V and BSD lines, later largely reconciled by the POSIX standard.

Using System Calls

How does a C programmer actually issue a system call? There is no difference between a system call and any other function call. For example, the read system call might be issued like this:

     amt = read(fd, buf, numbytes);

The implementation of the subroutine read varies with the UNIX implementation. It is usually an assembly language program that uses a machine instruction designed specifically for system calls, which isn't directly executable from C. Nowadays, it's safe to assume that system calls are simply C subroutines. Remember, though, that since a system call involves a context switch (from user to kernel and back), it takes much longer than a simple subroutine call within a process's own address space. So avoiding excessive system calls might be a wise strategy for programs that need to be tightly optimized.

Most system calls return a value. In the read example above, the number of bytes read is returned. To indicate an error, a system call returns a value that can't be mistaken for valid data, namely -1 . Therefore, our read example should have been coded something like this:

    if ((amt= read(fd, buf, numbytes)) == -1)
     {
      printf("Read failed\n");
      exit(1);
     }

Note that exit is a system call too, but it can't return an error.

There are lots of reasons why a system call that returns -1 might have failed. The global integer errno contains a code that indicates the reason. These error codes are defined at the beginning of the system call chapter of the UNIX manual [the pages titled ``intro(2)'']. Note that errno contains valid data only if a system call actually returns -1; you can't use errno alone to determine whether an error occurred.

The library routine perror takes as its argument a string, and prints out the string, a colon, and a description of the error condition stored in errno. So, a way of handling the error above that gives the programmer more information is:

    if ((amt= read(fd, buf, numbytes)) == -1)
     {
      perror("read");
      exit(1);
     }

which might print out read: file does not exist on an error.
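
Putting these pieces together, here is a minimal, self-contained sketch (not taken from the book) that opens a file named on the command line, reads it in fixed-size chunks, and reports any failure through perror. The file name and the 4096-byte buffer size are arbitrary choices for illustration.

    #include <stdio.h>      /* fprintf, printf, perror */
    #include <stdlib.h>     /* exit */
    #include <fcntl.h>      /* open */
    #include <unistd.h>     /* read, close */

    int main(int argc, char *argv[])
    {
        char buf[4096];
        ssize_t amt;
        long total = 0;
        int fd;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            exit(1);
        }

        /* open() is a system call; -1 means failure and errno is set */
        if ((fd = open(argv[1], O_RDONLY)) == -1) {
            perror("open");
            exit(1);
        }

        /* read() returns the byte count, 0 at end-of-file, or -1 on error */
        while ((amt = read(fd, buf, sizeof(buf))) > 0)
            total += amt;

        if (amt == -1) {
            perror("read");
            exit(1);
        }

        printf("read %ld bytes\n", total);

        if (close(fd) == -1) {
            perror("close");
            exit(1);
        }
        return 0;
    }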

Reading System Call Man Pages

The manual pages for all Unix system calls give a declaration for the system call. This shows you what type of value the system call returns, what types of arguments it takes, and what header files you need to include before you can use the system call. As an example, here is part of the man page for the read() system call.

SYNOPSIS

     #include <unistd.h>
     #include <sys/types.h>
     #include <sys/uio.h>

     int
     read(int d, char *buf, int nbytes)

DESCRIPTION

Read() attempts to read nbytes of data from the object referenced by the file descriptor d into the buffer pointed to by buf.

RETURN VALUES

If successful, the number of bytes actually read is returned. Upon reading end-of-file, zero is returned. Otherwise, a -1 is returned and the global variable errno is set to indicate the error.

The first part shows what header files you need to include. Then the declaration of the system call is given.

     int read(int d, char *buf, int nbytes)

read() takes three arguments: an int which is called d in the man page, a pointer to a character called buf (usually an array of characters), and another int called nbytes. read() returns an int as its result.

The names of the arguments given in the man pages need not be the same as the ones you use in your programs; they serve only to explain the function of each argument. For example, you could use the read() function in a program as follows:

  #include <unistd.h>    /* declares read() */

  int main()
   {
    int i, count, desc;
    char array[500];

    desc = 0;                          /* file descriptor 0 is standard input */
    count = 500;
    i = read(desc, array, count);      /* i receives the number of bytes read */
    return 0;
   }

Process-IDs and Process Groups

Every process has a process-ID, which is a positive integer. At any instant this is guaranteed to be unique. Every process but one has a parent. The exception is process 0, which is created and used by the kernel itself, for swapping.

A process's system data also records its parent-process-ID, the process-ID of its parent. If a process is orphaned because its parent has terminated, its parent-process-ID is changed to 1. This is the process-ID of the initialization process ( init), which is the ancestor of all other processes. In other words, the initialization process adopts all orphans.

Sometimes programmers choose to implement a subsystem as a group of related processes instead of as a single process. For example, a complex database management system might be broken down into several processes to gain additional concurrency of disk I/O. The UNIX kernel allows these related processes to be organized into a process group.

One of the group members is the group leader. Each member of the group has the group leader's process-ID as its process-group-ID. The kernel provides a system call to send a signal to each member of a designated process group. Typically, this would be used to terminate the entire group as a whole, but any signal can be broadcast in this way.

Any process can resign from its process group, become a leader of its own group (of one) by making its process-group-ID the same as its own process-ID, and then spawn child processes to round out the new group. Hence, a single user could be running, say, 10 processes formed into, say, three process groups.
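
As an illustration (not part of the text above), the following sketch shows a process resigning from its inherited group and becoming the leader of a new group of one, using the POSIX setpgid() call; the traditional call for this was setpgrp().

    #include <stdio.h>      /* printf, perror */
    #include <unistd.h>     /* getpid, getpgrp, setpgid */

    int main(void)
    {
        printf("pid = %ld, inherited process-group-ID = %ld\n",
               (long)getpid(), (long)getpgrp());

        /* Become the leader of a new process group whose ID
           equals this process's own process-ID. */
        if (setpgid(0, 0) == -1) {
            perror("setpgid");
            return 1;
        }

        printf("pid = %ld, new process-group-ID = %ld\n",
               (long)getpid(), (long)getpgrp());
        return 0;
    }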

A process group can have a control terminal, which is the first terminal device opened by the group leader. Normally, the control terminal for a user's processes is the terminal from which the user logged in. When a new process group is formed, the processes in the new group no longer have a control terminal.

The terminal device driver sends interrupt, quit, and hangup signals coming from a terminal to every process for which that terminal is the control terminal. Unless precautions are taken, hanging up a terminal, for example, will terminate all of the user's processes. To prevent this, a process can arrange to ignore hangups (this is what the nohup command does).
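
For example, a long-running program can protect itself from terminal hangups in roughly the way nohup does. A minimal sketch, assuming the program has no other use for SIGHUP:

    #include <signal.h>     /* signal, SIGHUP, SIG_IGN */
    #include <unistd.h>     /* sleep */

    int main(void)
    {
        /* Ignore hangup signals from the control terminal, as nohup does. */
        signal(SIGHUP, SIG_IGN);

        /* ... long-running work would go here; sleep() stands in for it ... */
        sleep(60);
        return 0;
    }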

When a process group leader terminates for any reason, all processes with the same control terminal are sent a hangup signal, which, unless caught or ignored, terminates them too. This feature makes hard-wired terminals, which can't be physically hung up, behave like those that can. Thus, when a user logs off (terminating the shell, which is normally the process group leader), everything is cleaned up for the next user, just as it would be if the user actually hung up.

In summary, there are three process-IDs associated with each process: its own process-ID, its parent-process-ID, and its process-group-ID.

Unix Permissions

A user-ID is a positive integer that is associated with a user's login name in the password file ( /etc/passwd). When a user logs in, the login command makes this ID the user-ID of the first process created, the login shell. Processes descended from the shell inherit this user-ID.

Users are also organized into groups (not to be confused with process groups), which have IDs too, called group-IDs. A user's login group-ID is taken from the password file and made the group-ID of his or her login shell.

Groups are defined in the group file ( /etc/group). While logged in, a user can change to another group of which he or she is a member; this changes the group-ID of the process that handles the request (normally the shell, via the newgrp command), which then is inherited by all descendent processes.

These two IDs are called the real user-ID and the real group-ID because they are representative of the real user, the person who is logged in. Two other IDs are also associated with each process: the effective user-ID and the effective group-ID. These IDs are normally the same as the corresponding real IDs, but they can be different, as we shall see shortly. For now, we'll assume the real and effective IDs are the same.

The effective ID is always used to determine permissions; the real ID is used for accounting and user-to-user communication. One indicates the user's permissions; the other indicates the user's identity.

Each file (ordinary, directory, or special) has, in its i-node, an owner user-ID and an owner group-ID. The i-node also contains three sets of three permission bits (nine bits in all). Each set has one bit for read permission, one bit for write permission, and one bit for execute permission. A bit is 1 if the permission is granted and 0 if not. There is a set for the owner, for the owner group, and for others (the public). Here are the bit assignments (bit 0 is the rightmost bit): bits 8, 7, and 6 are read, write, and execute for the owner; bits 5, 4, and 3 are read, write, and execute for the group; and bits 2, 1, and 0 are read, write, and execute for others.

Permission bits are frequently specified using an octal number. For example, octal 775 would mean read, write, and execute permission for the owner and the group, and only read and execute permission for others. The ls command would show this combination of permissions as rwxrwxr-x; in binary it would be 111111101; in octal it would be 775.
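
A program can set exactly this combination of bits on a file with the chmod system call. A minimal sketch; the file name is an arbitrary example:

    #include <stdio.h>      /* perror */
    #include <sys/stat.h>   /* chmod */

    int main(void)
    {
        /* 0775 = rwxrwxr-x: read/write/execute for owner and group,
           read/execute for others */
        if (chmod("somefile", 0775) == -1) {
            perror("chmod");
            return 1;
        }
        return 0;
    }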

The permission system determines whether a given process can perform a desired action (read, write, or execute) on a given file. For ordinary files the meaning of the actions is obvious. For directories the meaning of read is obvious, since directories are stored in ordinary files (the ls command reads a directory, for example). ``Write'' permission on a directory means the ability to issue a system call that would modify the directory (add or remove a link). ``Execute'' permission means the ability to use the directory in a path (sometimes called ``search'' permission). For special files, read and write permissions mean the ability to execute the read and write system calls. What, if anything, that implies is up to the designer of the device driver. Execute permission on a special file is meaningless.

The permission system determines whether permission will be granted using this algorithm:

  1. If the effective user-ID is zero, permission is instantly granted (the effective user is the superuser).
  2. If the process's effective user-ID and the file's user-ID match, then the owner set of bits is used to see if the action will be allowed.
  3. If the process's effective group-ID and the file's group-ID match, then the group set of bits is used.
  4. If neither the user-IDs nor group-IDs match, then the process is an ``other'' and the third set of bits is used.
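
A sketch of this algorithm in C may make the order of the tests clearer. This is an illustration only, not how any kernel is actually coded; it assumes the nine permission bits are laid out as described above and expresses the requested action as a 3-bit rwx mask (4 = read, 2 = write, 1 = execute).

    #include <sys/types.h>   /* uid_t, gid_t */

    /* Return 1 if the requested action is permitted, 0 if not. */
    int permitted(uid_t euid, gid_t egid,   /* process's effective IDs */
                  uid_t fuid, gid_t fgid,   /* file's owner user-ID and group-ID */
                  unsigned mode,            /* the nine permission bits */
                  unsigned action)          /* 4 = read, 2 = write, 1 = execute */
    {
        if (euid == 0)                          /* superuser: always allowed */
            return 1;
        if (euid == fuid)                       /* owner: bits 8-6 */
            return ((mode >> 6) & action) == action;
        if (egid == fgid)                       /* group: bits 5-3 */
            return ((mode >> 3) & action) == action;
        return (mode & action) == action;       /* others: bits 2-0 */
    }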

Occasionally we want a user to temporarily take on the privileges of another user. For example, when we execute the passwd command to change our password, we would like the effective user-ID to be that of root (the traditional login name for the superuser), because only root can write into the password file. This is done by making root the owner of the passwd command (i.e., the ordinary file containing the passwd program), and then turning on another permission bit in the passwd command's i-node, called the set-user-ID bit. Executing a program with this bit on changes the effective user-ID to the owner of the file containing the program. Since it's the effective, rather than the real, user-ID that determines permissions, this allows a user to temporarily take on the permissions of someone else. The set-group-ID bit is used in a similar way.
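
For example, a set-user-ID program that only occasionally needs the file owner's privileges can switch its effective user-ID back and forth. A minimal sketch using the POSIX seteuid() call (the privileged work itself is only hinted at):

    #include <stdio.h>       /* perror */
    #include <sys/types.h>   /* uid_t */
    #include <unistd.h>      /* getuid, geteuid, seteuid */

    int main(void)
    {
        uid_t real = getuid();          /* the invoking user */
        uid_t privileged = geteuid();   /* the program file's owner, via set-user-ID */

        /* Give up the extra privileges until they are actually needed. */
        if (seteuid(real) == -1)
            perror("seteuid(real)");

        /* ... ordinary, unprivileged work ... */

        /* Temporarily take the owner's identity back for privileged work. */
        if (seteuid(privileged) == -1)
            perror("seteuid(privileged)");

        /* ... privileged work, for example rewriting a protected file ... */

        /* And drop it again. */
        if (seteuid(real) == -1)
            perror("seteuid(real)");
        return 0;
    }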

Since both user-IDs (real and effective) are inherited from parent process to child process, it is possible to use the set-user-ID feature to run with a borrowed effective user-ID for a very long time.

System Calls to Get IDs

Here are the system calls to get the IDs mentioned above:

    int getuid()            /* Get the real user-ID */
                            /* Returns the ID */

    int getgid()            /* Get the real group-ID */
                            /* Returns the ID */

    int geteuid()           /* Get the effective user-ID */
                            /* Returns the ID */

    int getegid()           /* Get the effective group-ID */
                            /* Returns the ID */

    int getpid()            /* Get the process-ID */
                            /* Returns the ID */

    int getppid()           /* Get the parent process-ID */
                            /* Returns the ID */

    int getpgrp()           /* Get the process-group-ID */
                            /* Returns the ID */

Each of these system calls returns a single ID, as indicated by the comments following their function headers.
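
A short program that simply prints all of these IDs; the casts to long are there only so one printf format works regardless of how a given system defines the underlying ID types:

    #include <stdio.h>       /* printf */
    #include <sys/types.h>   /* uid_t, gid_t, pid_t */
    #include <unistd.h>      /* the get*id calls */

    int main(void)
    {
        printf("real user-ID        = %ld\n", (long)getuid());
        printf("real group-ID       = %ld\n", (long)getgid());
        printf("effective user-ID   = %ld\n", (long)geteuid());
        printf("effective group-ID  = %ld\n", (long)getegid());
        printf("process-ID          = %ld\n", (long)getpid());
        printf("parent process-ID   = %ld\n", (long)getppid());
        printf("process-group-ID    = %ld\n", (long)getpgrp());
        return 0;
    }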

time System Call

    long time(timep)                      /* Get system time */
     long *timep;                         /* Pointer to time */

time returns the time, in seconds, since January 1, 1970. If the argument timep is not NULL, the current time is also stored into the long integer to which it points. This is a carry-over from the days before the C language supported long integers; it is of no use now that a simple assignment statement can be used to capture the return value. The argument to time should always be NULL, i.e., value = time(NULL);
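
A minimal example using the modern prototype, time_t time(time_t *), declared in <time.h>; ctime() is used only to print the value in a readable form:

    #include <stdio.h>      /* printf */
    #include <time.h>       /* time, ctime, time_t */

    int main(void)
    {
        time_t now = time(NULL);    /* seconds since January 1, 1970 (UTC) */

        printf("%ld seconds since the Epoch\n", (long)now);
        printf("which is %s", ctime(&now));   /* ctime's string ends with a newline */
        return 0;
    }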



NEWS CONTENTS

Old News ;-)


[Jun 11, 2021] Filesystem Optimizations For An NVMe Based System On Latest Hardware

May 24, 2021 | www.linuxquestions.org

Registered: Feb 2009

Location: Suriname

Distribution: Slackware 12.1

Posts: 6

Alright, you just got that fast NVMe SSD, or even a couple. You hope this drive, the size of a pack of chewing gum, will feed your need for speed.

So, you install it in your system and notice that your system is noticeably more responsive; but there's something that makes you feel as though you might have missed something. What is it?

Well, for starters, most likely, the one thing most people tend to overlook is the filesystem they choose to format their new NVMe SSD with. Two of the most popular filesystems on Linux are "The Fourth Extended Filesystem", also known as "ext4", and XFS, a 64-bit journalling file system created by Silicon Graphics, Inc.

EXT4 and XFS are robust, journalling filesystems, very well known and supported in the Linux world. They are also offered as options for formatting hard drives during installation of the various Linux distributions. But what is not as well known is that EXT4 and XFS, like most other filesystems, were never intended to be used on anything other than spinning hard drives.

There was no NAND flash-based media when they were developed.

Granted, it works fine, can be grown or shrunk, depending on the needs of the user or system administrator. Why then, you ask, did I even write this tutorial? I'll tell you.
SSDs and NVMe PCIe drives are flash-based, and do things quite a bit differently than rotational hard drives. In short, the filesystem I'm about to suggest, the Flash Friendly File System (F2FS), was designed by Samsung from the ground up specifically for NAND flash-based SSDs. Static binaries start up noticeably faster on this filesystem. And in this tutorial, I will be using a pair of Samsung 970 EVO PLUS NVMe SSDs, which offer sequential read/write speeds of up to 3,500/3,300 MB/s. And as a bonus, the 970 EVO Plus includes an AES 256-bit hardware-based encryption engine; a nice touch for those who like to encrypt their data.

Why two? Because my motherboard has two M.2 slots, while some have three or more; and because I prefer at least two drives in my setup. This means that in your case, one drive might get a Windows installation, and the other Linux, or a Virtual Machine (VM) if you're into that sort of thing.

Personally, I advise installing the '/' (root) on drive 1, and '/home' (user files) on drive 2. Or maybe you're a video editor, and the Operating System (OS) is on drive 1, and drive 2 for your video editing and rendering software, etc. Or perhaps you're a gamer, and place the OS on drive 1, and your games on drive number 2. The point is, when possible, place your OS on drive 1, and the programs you install yourself, on the second drive. And should you have three or more drives, which you can use from within your OS, even better. This will allow you to interleave commands between the drives, and as a result, you end up not noticing any slowdowns in your perceived performance of the system.

Even though NVMe drives are way faster than rotational drives, it's still more efficient to separate large programs from the OS, even though some might say it's not really necessary, because the drives are so fast with a lot of throughput. It all depends on how you will be using the system.

Anyway, let's get to the meat of this tutorial.

F2FS stands for "Flash Friendly File System", and was developed at Samsung Electronics Co., Ltd. And F2FS is also a filesystem designed to make the most of the performance capabilities of modern NAND flash-based devices. It was designed from the ground up, for that purpose.

While it is possible to use it on rotational hard drives, it would defeat the purpose; as you would not allow the filesystem to show you what it can do. It really should be used on NAND flash-based drives.
My personal layout I will describe below.

First the hardware specifications:

CPU Intel i9-9900k
MOTHERBOARD Asus TUF-Z390 PLUS GAMING
RAM 32GB DDR4

NVMe SSDs 2x Samsung 970 EVO PLUS 250GB

SATA SSDs 2x Samsung 860 EVO 1TB

SATA HDD 1x Western Digital 3TB 5400rpm

Linux OS Slackware -current 64 bit (August 30, 2019)

You will need F2FS tools: f2fs-tools-1.12.0-x86_64-1.txz (latest version at the time of this writing)

You may download it from https://slackware.pkgs.org , or any of the other repositories.

Since I desired to get the most user perceived speed out of the system, I used a combination of filesystems for the system, temp, and home partitions.

Like this:

Swap is the first partition, and also the smallest, at 8GB.

The OS root partition (/) at 32GB, is formatted with F2FS (programs start fastest with F2FS)

The temp partition (/tmp) at 40GB, is formatted with EXT4 (fastest when compiling software)

The user files partition (/home) at 150GB, is formatted with XFS (speedy for large and random files)

Here is a link with a speed test comparison between BTRFS, EXT, F2FS, XFS, on Linux:
https://www.phoronix.com/scan.php?pa...esystems&num=1

Now, one last thing: the drive space is setup under LVM (Logical Volume Management)
This makes it much simpler to shrink and or expand partition sizes on the fly.
I created the following:

1 Physical Volume (PV)
1 Volume Group (VG)
3 Logical Volumes (LVs)

So far this is fairly straightforward.
I forgot to mention that the other NVMe SSD was used for a Windows 10 installation, *before* I started with the install of Slackware Linux. This prevents Windows from overwriting the boot loader.

Now, for those unfamiliar with creating an LVM setup, I'll give generic instructions, which should work on most Linux systems. Some of the instructions are lifted from alien Bob's slackware LVM. Read all the instructions before you begin. Let's go.

Creating the Logical Volumes (LVs) has to happen before you run the part of the installer where you actually install the OS. Start by creating the partition where you will place the LVM, using fdisk for BIOS or gdisk for GPT disks. After creating the partition, change its type to "8e", which is Linux LVM. Reboot the system, and continue with setting up the LVM.

Now I will leave the partition sizes up to you, but in this example, I will be dividing a 250GB SSD, over 2 partitions; swap, and the rest of the space for system install.
Start as follows:

1. pvcreate /dev/nvme0n1p2 (<-- the second partition after swap)

2. vgcreate slackware /dev/nvme0n1p2 (<-- slackware is the name I chose, can be anything)

3. lvcreate -L 32GB -n root slackware

4. lvcreate -L 40GB -n temp slackware

5. lvcreate -l 100%FREE -n home slackware (this command uses all the remaining space for home)

Now you can continue with the OS installation.
Make sure you choose "/dev/slackware/root" as the "/" partition, when asked where to install to; and format it with F2FS.
Then make sure you choose "/dev/slackware/temp" to mount as the "/tmp" partition, and choose to format it with EXT4. And lastly, choose "/dev/slackware/home" to mount as the "/home" partition, and format it with XFS.

When the installer finishes, it will ask you to reboot. But select "no" and go with the option to let it drop you into a command prompt.
To boot this setup, you need to add the F2FS modules to your initrd, if using LILO, and install the F2FS tools in your OS, before you reboot.

I previously downloaded the f2fs-tools-1.12.0-x86_64-1.txz on a partition of one of the other SSDs, which I simply mounted, located the binary, and ran "installpkg f2fs-tools-1.12.0-x86_64-1.txz".

Now, chroot into the installed OS, by typing: chroot /mnt (<-- Slackware specific instructions YMMV)

Here's the command to create the initrd for kernel version 4.19.69 with modules for LVM and F2FS (this is a single line):
mkinitrd -c -k 4.19.69 -m crc32:libcrc32c:crc32c_generic:crc32c-intel:crc32-pclmul:f2fs -f f2fs -r /dev/slackware/root -L

For a system using LILO, edit the lilo.conf file, so lilo uses the initrd, and add the following:
image = /boot/vmlinuz-generic-4.19.69
initrd = /boot/initrd.gz
root = /dev/slackware/root
label = linux
read-only

Run /sbin/lilo when you are done editing lilo.conf.

If using GRUB, after installing the F2FS tools, you need to make sure the LVM modules load before the rest of the system.
Edit /etc/default/grub, find the following line:
GRUB_PRELOAD_MODULES="... " (<-- there might be other modules already there)

And add the required modules between the quotation marks at the end.
Like so:

GRUB_PRELOAD_MODULES="... lvm f2fs"

Then run update-grub (or grub-mkconfig -o /boot/grub/grub.cfg), wait for it to complete, and you should be set.
If at this point you reboot, and you can boot into your shiny new system without error... Congratulations!
You are done.

Cheers!

For suggestions on improving this tutorial, please email to:
[email protected]
or simply reply to this thread

=========================================================================
DISCLAIMER:

If your computer explodes, any part of it gets damaged, you lose data, or Earth is destroyed as a result of you following these instructions... I am not responsible for anything other than providing you with the instructions on how to setup a system, with user-perceivable increased speed. Understand this before you go through with it.


Last edited by WiseSon; 09-26-2019 at 12:17 PM. Reason: typos, spacing
09-16-2019, 01:59 PM # 2
MensaWater

LQ Guru


Registered: May 2005

Location: Atlanta Georgia USA

Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO

Posts: 7,831

Blog Entries: 15


Rep:

Nice write up.

One thing though - you don't mention RAID setup. While I prefer to separate OS and other filesystems as you mentioned I would not do it at the cost of RAID redundancy of the drives. Rather I'd put both drives into a RAID1 setup (NVME hardware or meta disk) then use that as the PV for the VG then create separate LVs for the OS and other filesystems. Of course I'd lose space availability of the equivalent of one drive but wouldn't have to worry about one drive going down blowing me out of the water as I would in a non-RAID config.

We made the mistake of using a single SSD card in a system because it claimed to do "internal redundancy" by duplicating data from one memory spot to another. It bit us when the controller on the SSD card itself died - that particular item was not redundant so we lost the drive and the data on it.

2 members found this post helpful.
09-17-2019, 12:49 PM # 3
WiseSon

LQ Newbie


Registered: Feb 2009

Location: Suriname

Distribution: Slackware 12.1

Posts: 6


Original Poster

Rep:
Hi,

You are right.
An LVM in a RAID SETUP, RAID 1 at least, would provide the minimum required redundancy.
Though I must confess, I have never setup a RAID array before. However, since most of the new motherboards come with a halfway decent raid chip, and the UEFI includes automated RAID array creation, it has become rather easy.

While I familiarize myself with the RAID documentation, I will also add something that is of use for those with UEFI/GPT setups: I successfully installed Slackware with GRUB on the GPT-initialized NVMe SSD. It turned out to be very simple as well. I'll edit the instructions to include the updated information.

Thanks for the heads up

Cheers!

09-19-2019, 04:41 AM # 4
rogan

Member


Registered: Aug 2004

Distribution: Slackware

Posts: 197


Rep:

Excellent work.

I would not draw too many conclusions based on Phoronix benchmarks though. File system performance and usability vary wildly between every kernel release (even "maintenance" releases). The best approach is always to do some simple tests yourself, with the software you intend to use, and for your use case.

While designing a file system specifically with the underlying media in mind might seem like a good idea, I can't help but wonder whether the manufacturers of these devices thought, "Someone will probably come up with a file system that actually works well with these devices, one day," or whether the devices are simply well suited to the file systems that were available at the time of their introduction.

Anyways; f2fs is (as of 5.2.15 on current) not really up to par in my tests. Here's an excerpt from one of my benchmark logs:

Benchmarking wd 250G ssd on 5.2.15, 32G AMD 9590 sata, current of 10sept 2019.
All tests are done on newly formatted and trimmed (ssd) media. File systems
were loaded with hardware accelerated routines where applicable.
Hot cache copying ~50G (a few ftp archives and ~30 kernel source trees)
from ssd (ext4) to ssd. Reads were never close to exhaustion in any of these.
Measured time is return time, variance is around 10 sec. Actual time to
unmount readiness is ~20 +seconds for xfs, a little bit less for the others.

mkfs.btrfs -L d5 -m single -d single /dev/sde1:

root@trooper~# time cp -r /usr/local/src/system /mnt/d5/

real 5m34.001s
user 0m4.620s
sys 1m29.392s
root@trooper~# time rm -rf /mnt/d5/system

real 0m42.177s
user 0m0.797s
sys 0m40.454s

mkfs.ext4 -L d5 -O 64bit -E lazy_itable_init=0,lazy_journal_init=0 /dev/sde1:

root@trooper~# time cp -r /usr/local/src/system /mnt/d5/

real 5m10.676s
user 0m4.432s
sys 1m26.711s
root@trooper~# time rm -rf /mnt/d5/system

real 0m27.748s
user 0m0.912s
sys 0m25.331s

mkfs.xfs -L d5 /dev/sde1:

root@trooper~# time cp -r /usr/local/src/system /mnt/d5/

real 4m50.607s
user 0m4.642s
sys 1m22.464s
root@trooper~# time rm -rf /mnt/d5/system

real 1m5.594s
user 0m0.915s
sys 0m38.763s

mkfs.f2fs -l d5 /dev/sde1:

root@trooper~# time cp -r /usr/local/src/system /mnt/d5/

real 7m25.367s
user 0m4.171s
sys 1m22.977s
root@trooper~# time rm -rf /mnt/d5/system

real 1m1.034s
user 0m0.846s
sys 0m26.094s

While test installing "current" systems on f2fs root I've also had some nasty surprises:
#1 mkinitrd does not include dependencies for f2fs (crc32c) when you build an initrd.
#2 fsck while booting on f2fs always claim corruption.

09-19-2019, 05:30 AM # 5
syg00

LQ Veteran


Registered: Aug 2003

Location: Australia

Distribution: Lots ...

Posts: 19,645

Rep:

Many years ago - well before the turn of the century - we had a very bad time with an early log-structured enterprise SAN (mainframe). I've always been leery of them ever since - especially the garbage collection.
And given the changes constantly incorporated into the traditional filesystems for flash support, I see no requirement in normal operation for f2fs.

Each to their own though, and good to see documentation efforts like this.

[Jun 08, 2021] OpenZFS 2.0 release unifies Linux, BSD and adds tons of new features

May 24, 2021 | arstechnica.com

This Monday, ZFS on Linux lead developer Brian Behlendorf published the OpenZFS 2.0.0 release to GitHub. Along with quite a lot of new features, the announcement brings an end to the former distinction between "ZFS on Linux" and ZFS elsewhere (for example, on FreeBSD). This move has been a long time coming -- the FreeBSD community laid out its side of the roadmap two years ago -- but this is the release that makes it official.

Availability

The new OpenZFS 2.0.0 release is already available on FreeBSD, where it can be installed from ports (overriding the base system ZFS) on FreeBSD 12 systems and will be the base FreeBSD version in the upcoming FreeBSD 13. On Linux, the situation is a bit more uncertain and depends largely on the Linux distro in play.

Users of Linux distributions that use DKMS-built OpenZFS kernel modules will tend to get the new release rather quickly. Users of the better-supported but slower-moving Ubuntu probably won't see OpenZFS 2.0.0 until Ubuntu 21.10, nearly a year from now. For Ubuntu users who are willing to live on the edge, the popular but third-party and individually maintained jonathonf PPA might make it available considerably sooner.

OpenZFS 2.0.0 modules can be built from source for Linux kernels from 3.10-5.9 -- but most users should stick to getting prebuilt modules from distributions or well-established developers. "Far beyond the beaten trail" is not a phrase one should generally apply to the file system that holds one's precious data!

New features

Sequential resilver

Rebuilding degraded arrays in ZFS has historically been very different from conventional RAID. On nearly empty arrays, the ZFS rebuild -- known as "resilvering" -- was much faster because ZFS only needs to touch the used portion of the disk rather than cloning each sector across the entire drive. But this process involved an abundance of random I/O -- so on more nearly full arrays, conventional RAID's more pedestrian block-by-block whole-disk rebuild went much faster. With sequential resilvering, ZFS gets the best of both worlds: largely sequential access while still skipping unused portions of the disk(s) involved.

Persistent L2ARC

One of ZFS' most compelling features is its advanced read cache, known as the ARC. Systems with very large, very hot working sets can also implement an SSD-based read cache called L2ARC, which populates itself from blocks in the ARC nearing eviction. Historically, one of the biggest issues with L2ARC is that although the underlying SSD is persistent, the L2ARC itself is not -- it becomes empty on each reboot (or export and import of the pool). This new feature allows data in the L2ARC to remain available and viable between pool import/export cycles (including system reboots), greatly increasing the potential value of the L2ARC device.

Zstd compression algorithm

OpenZFS offers transparent inline compression, controllable at per-data-set granularity. Traditionally, the algorithm most commonly used has been lz4, a streaming algorithm offering a relatively poor compression ratio but very light CPU loading. OpenZFS 2.0.0 brings support for zstd -- an algorithm designed by Yann Collet (the author of lz4) which aims to provide compression similar to gzip, with CPU load similar to lz4.

[Chart (credit: Kjeld Schouten-Lebbing): compression ratio (y-axis) versus transfer speed (x-axis). The zstd levels 1-19 cluster in the upper left (dark blue), zstd-fast runs along the lower right (light blue), and lz4 appears as a single orangish point just to the right of zstd-fast.]

These graphs are a bit difficult to follow -- but essentially, they show zstd achieving its goals. During compression (disk writes), zstd-2 is more efficient than even gzip-9 while maintaining high throughput.

Compared to lz4, zstd-2 achieves 50 percent higher compression in return for a 30 percent throughput penalty. On the decompression (disk read) side, the throughput penalty is slightly higher, at around 36 percent.

Keep in mind, the throughput "penalties" described assume negligible bottlenecking on the storage medium itself. In practice, most CPUs can run rings around most storage media (even relatively slow CPUs and fast SSDs). ZFS users are broadly accustomed to seeing lz4 compression accelerate workloads in the real world, not slow them down!

Redacted replication

This one's a bit of a brain-breaker. Let's say there are portions of your data that you don't want to back up using ZFS replication. First, you clone the data set. Next, you delete the sensitive data from the clone. Then, you create a bookmark on the parent data set, which marks the blocks that changed from the parent to the clone. Finally, you can send the parent data set to its backup target, including the --redact redaction_bookmark argument -- and this replicates only the non-sensitive blocks to the backup target.

Additional improvements and changes

In addition to the major features outlined above, OpenZFS 2.0.0 brings fallocate support; improved and reorganized man pages; higher performance for zfs destroy , zfs send , and zfs receive; more efficient memory management; and optimized encryption performance. Meanwhile, some infrequently used features -- deduplicated send streams, dedupditto blocks, and the zfs_vdev_scheduler module option -- have all been deprecated.

For a full list of changes, please see the original release announcement on GitHub at https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0 .

[Jul 14, 2020] What exactly is RESTful programming

Stack Overflow
hasen ,
What exactly is RESTful programming?
kushalvm ,
see also the answer at the following link stackoverflow.com/a/37683965/3762855 Ciro Corvino Jun 7 '16 at 19:59
Shirgill Farhan , 2015-04-15 11:26:17
An architectural style called REST (Representational State Transfer) advocates that web applications should use HTTP as it was originally envisioned. Lookups should use GET requests. PUT, POST, and DELETE requests should be used for mutation, creation, and deletion respectively. REST proponents tend to favor URLs such as
http://myserver.com/catalog/item/1729
but the REST architecture does not require these "pretty URLs". A GET request with a parameter
http://myserver.com/catalog?item=1729
is every bit as RESTful. Keep in mind that GET requests should never be used for updating information. For example, a GET request for adding an item to a cart
http://myserver.com/addToCart?cart=314159&item=1729
would not be appropriate. GET requests should be idempotent. That is, issuing a request twice should be no different from issuing it once. That's what makes the requests cacheable. An "add to cart" request is not idempotent -- issuing it twice adds two copies of the item to the cart. A POST request is clearly appropriate in this context. Thus, even a RESTful web application needs its share of POST requests. This is taken from the excellent book Core JavaServer Faces by David M. Geary.
HoCo_ ,
Listing available idempotent operations: GET (safe), PUT & DELETE (an exception is mentioned in this link restapitutorial.com/lessons/idempotency.html). Additional reference for safe & idempotent methods: w3.org/Protocols/rfc2616/rfc2616-sec9.html - Abhijeet Jul 21 '15 at 4:00
22 revs, 15 users 88% , 2019-04-10 15:31:47
REST is the underlying architectural principle of the web. The amazing thing about the web is the fact that clients (browsers) and servers can interact in complex ways without the client knowing anything beforehand about the server and the resources it hosts. The key constraint is that the server and client must both agree on the media used, which in the case of the web is HTML. An API that adheres to the principles of REST does not require the client to know anything about the structure of the API. Rather, the server needs to provide whatever information the client needs to interact with the service. An HTML form is an example of this: The server specifies the location of the resource and the required fields. The browser doesn't know in advance where to submit the information, and it doesn't know in advance what information to submit. Both forms of information are entirely supplied by the server. (This principle is called HATEOAS: Hypermedia As The Engine Of Application State.) So, how does this apply to HTTP, and how can it be implemented in practice? HTTP is oriented around verbs and resources. The two verbs in mainstream usage are GET and POST, which I think everyone will recognize. However, the HTTP standard defines several others such as PUT and DELETE. These verbs are then applied to resources, according to the instructions provided by the server. For example, let's imagine that we have a user database that is managed by a web service. Our service uses a custom hypermedia based on JSON, for which we assign the mimetype application/json+userdb (there might also be an application/xml+userdb and application/whatever+userdb - many media types may be supported). The client and the server have both been programmed to understand this format, but they don't know anything about each other. As Roy Fielding points out:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types.
A request for the base resource / might return something like this:

Request
GET /
Accept: application/json+userdb
Response
200 OK
Content-Type: application/json+userdb

{
    "version": "1.0",
    "links": [
        {
            "href": "/user",
            "rel": "list",
            "method": "GET"
        },
        {
            "href": "/user",
            "rel": "create",
            "method": "POST"
        }
    ]
}
We know from the description of our media that we can find information about related resources from sections called "links". This is called Hypermedia controls. In this case, we can tell from such a section that we can find a user list by making another request for /user:

Request
GET /user
Accept: application/json+userdb
Response
200 OK
Content-Type: application/json+userdb

{
    "users": [
        {
            "id": 1,
            "name": "Emil",
            "country: "Sweden",
            "links": [
                {
                    "href": "/user/1",
                    "rel": "self",
                    "method": "GET"
                },
                {
                    "href": "/user/1",
                    "rel": "edit",
                    "method": "PUT"
                },
                {
                    "href": "/user/1",
                    "rel": "delete",
                    "method": "DELETE"
                }
            ]
        },
        {
            "id": 2,
            "name": "Adam",
            "country: "Scotland",
            "links": [
                {
                    "href": "/user/2",
                    "rel": "self",
                    "method": "GET"
                },
                {
                    "href": "/user/2",
                    "rel": "edit",
                    "method": "PUT"
                },
                {
                    "href": "/user/2",
                    "rel": "delete",
                    "method": "DELETE"
                }
            ]
        }
    ],
    "links": [
        {
            "href": "/user",
            "rel": "create",
            "method": "POST"
        }
    ]
}
We can tell a lot from this response. For instance, we now know we can create a new user by POSTing to /user:

Request
POST /user
Accept: application/json+userdb
Content-Type: application/json+userdb

{
    "name": "Karl",
    "country": "Austria"
}
Response
201 Created
Content-Type: application/json+userdb

{
    "user": {
        "id": 3,
        "name": "Karl",
        "country": "Austria",
        "links": [
            {
                "href": "/user/3",
                "rel": "self",
                "method": "GET"
            },
            {
                "href": "/user/3",
                "rel": "edit",
                "method": "PUT"
            },
            {
                "href": "/user/3",
                "rel": "delete",
                "method": "DELETE"
            }
        ]
    },
    "links": {
       "href": "/user",
       "rel": "list",
       "method": "GET"
    }
}
We also know that we can change existing data:

Request
PUT /user/1
Accept: application/json+userdb
Content-Type: application/json+userdb

{
    "name": "Emil",
    "country": "Bhutan"
}
Response
200 OK
Content-Type: application/json+userdb

{
    "user": {
        "id": 1,
        "name": "Emil",
        "country": "Bhutan",
        "links": [
            {
                "href": "/user/1",
                "rel": "self",
                "method": "GET"
            },
            {
                "href": "/user/1",
                "rel": "edit",
                "method": "PUT"
            },
            {
                "href": "/user/1",
                "rel": "delete",
                "method": "DELETE"
            }
        ]
    },
    "links": {
       "href": "/user",
       "rel": "list",
       "method": "GET"
    }
}
Notice that we are using different HTTP verbs (GET, PUT, POST, DELETE etc.) to manipulate these resources, and that the only knowledge we presume on the client's part is our media definition.

Further reading:

(This answer has been the subject of a fair amount of criticism for missing the point. For the most part, that has been a fair critique. What I originally described was more in line with how REST was usually implemented a few years ago when I first wrote this, rather than its true meaning. I've revised the answer to better represent the real meaning.)
T.R. ,
No. REST didn't just pop up as another buzzword. It came about as a means of describing an alternative to SOAP-based data exchange. The term REST helps frame the discussion about how to transfer and access data. - tvanfosson Mar 22 '09 at 15:11
D.Shawley , 2009-03-22 19:37:57
RESTful programming is about: The last one is probably the most important in terms of consequences and overall effectiveness of REST. Overall, most of the RESTful discussions seem to center on HTTP and its usage from a browser and what not. I understand that R. Fielding coined the term when he described the architecture and decisions that lead to HTTP. His thesis is more about the architecture and cache-ability of resources than it is about HTTP. If you are really interested in what a RESTful architecture is and why it works, read his thesis a few times and read the whole thing not just Chapter 5! Next look into why DNS works. Read about the hierarchical organization of DNS and how referrals work. Then read and consider how DNS caching works. Finally, read the HTTP specifications (RFC2616 and RFC3040 in particular) and consider how and why the caching works the way that it does. Eventually, it will just click. The final revelation for me was when I saw the similarity between DNS and HTTP. After this, understanding why SOA and Message Passing Interfaces are scalable starts to click. I think that the most important trick to understanding the architectural importance and performance implications of RESTful and Shared Nothing architectures is to avoid getting hung up on the technology and implementation details. Concentrate on who owns resources, who is responsible for creating/maintaining them, etc. Then think about the representations, protocols, and technologies.
Philip Couling ,
An answer providing a reading list is very appropriate for this question. - ellisbben Feb 1 '12 at 19:50
pbreitenbach ,
This is what it might look like. Create a user with three properties:
POST /user
fname=John&lname=Doe&age=25
The server responds:
200 OK
Location: /user/123
In the future, you can then retrieve the user information:
GET /user/123
The server responds:
200 OK
<fname>John</fname><lname>Doe</lname><age>25</age>
To modify the record (lname and age will remain unchanged):
PATCH /user/123
fname=Johnny
To update the record (and consequently lname and age will be NULL):
PUT /user/123
fname=Johnny
Himanshu Ahuja ,
For me this answer captured the essence of the desired answer. Simple and pragmatic. Granted there are lots of other criteria, but the example provided is a great launch pad. - CyberFonic Feb 1 '12 at 22:09
14 revs, 4 users 94% , 2013-09-08 17:43:09
A great book on REST is REST in Practice. Must reads are Representational State Transfer (REST) and REST APIs must be hypertext-driven. See Martin Fowler's article on the Richardson Maturity Model (RMM) for an explanation of what a RESTful service is. To be RESTful, a service needs to fulfill Hypermedia as the Engine of Application State (HATEOAS), that is, it needs to reach level 3 in the RMM; read the article for details or the slides from the qcon talk.
The HATEOAS constraint is an acronym for Hypermedia as the Engine of Application State. This principle is the key differentiator between a REST and most other forms of client server system. ... A client of a RESTful application need only know a single fixed URL to access it. All future actions should be discoverable dynamically from hypermedia links included in the representations of the resources that are returned from that URL. Standardized media types are also expected to be understood by any client that might use a RESTful API. (From Wikipedia, the free encyclopedia)
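To make the constraint concrete, here is a sketch of what such a hypermedia-driven exchange might look like. The entry URL, link relations and JSON field names are invented for illustration only; the point is that the client starts from one fixed URL and discovers every other URL from links in the responses rather than hardcoding them.

GET /api
200 OK
Content-Type: application/json
{
  "links": [
    { "rel": "users",  "href": "/api/users" },
    { "rel": "orders", "href": "/api/orders" }
  ]
}

GET /api/users            (URL taken from the "users" link above, not hardcoded)
200 OK
Content-Type: application/json
{
  "users": [
    { "name": "John Doe",
      "links": [ { "rel": "self", "href": "/api/users/123" } ] }
  ]
}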
REST Litmus Test for Web Frameworks is a similar maturity test for web frameworks. Approaching pure REST: Learning to love HATEOAS is a good collection of links. REST versus SOAP for the Public Cloud discusses the current levels of REST usage. REST and versioning discusses Extensibility, Versioning, Evolvability, etc. through Modifiability
Brent Bradburn ,
I think this answer touches the key point of understanding REST: what the word representational means. Level 1 - Resources is about state. Level 2 - HTTP Verbs is about transfer (read: change). Level 3 - HATEOAS is about driving future transfers via the representation (the JSON/XML/HTML returned), which means you already know how to conduct the next round of the conversation from the information returned. So REST reads "(representational (state transfer))", instead of "((representational state) transfer)". – lcn Dec 9 '14 at 19:49
Ravi , 2012-11-18 20:46:20
What is REST?

REST stands for Representational State Transfer. (It is sometimes spelled "ReST".) It relies on a stateless, client-server, cacheable communications protocol -- and in virtually all cases, the HTTP protocol is used.

REST is an architecture style for designing networked applications. The idea is that, rather than using complex mechanisms such as CORBA, RPC or SOAP to connect between machines, simple HTTP is used to make calls between machines. In many ways, the World Wide Web itself, based on HTTP, can be viewed as a REST-based architecture. RESTful applications use HTTP requests to post data (create and/or update), read data (e.g., make queries), and delete data. Thus, REST uses HTTP for all four CRUD (Create/Read/Update/Delete) operations.

REST is a lightweight alternative to mechanisms like RPC (Remote Procedure Calls) and Web Services (SOAP, WSDL, et al.). Later, we will see how much simpler REST is. Despite being simple, REST is fully-featured; there's basically nothing you can do in Web Services that can't be done with a RESTful architecture.

REST is not a "standard". There will never be a W3C recommendation for REST, for example. And while there are REST programming frameworks, working with REST is so simple that you can often "roll your own" with standard library features in languages like Perl, Java, or C#.
One of the best references I found when trying to work out the simple, real meaning of REST: http://rest.elkstein.org/
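For example, the four CRUD operations on a user resource might map onto plain HTTP requests like this (the /user/123 path is a hypothetical example of a resource URL, not a prescribed layout):

POST   /user        (Create - submit a new user)
GET    /user/123    (Read   - retrieve the representation of user 123)
PUT    /user/123    (Update - replace user 123 with the submitted representation)
DELETE /user/123    (Delete - remove user 123)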
Chaklader Asfak Arefe ,
This is a really concise answer. Can you also describe why REST is called stateless? – Chaklader Asfak Arefe Feb 12 '19 at 17:15
dbr ,
REST is using the various HTTP methods (mainly GET/PUT/DELETE) to manipulate data. Rather than using a specific URL to delete a resource (say, /user/123/delete), you would send a DELETE request to the /user/[id] URL; likewise, to edit a user or to retrieve info on a user, you send the appropriate request (e.g. a GET) to /user/[id]. For example, instead of a set of URLs which might look like some of the following..
GET /delete_user.x?id=123
GET /user/delete
GET /new_user.x
GET /user/new
GET /user?id=1
GET /user/id/1
You use the HTTP "verbs" and have..
GET /user/2
DELETE /user/2
PUT /user
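On the server side, routing the same URL to different handlers based on the HTTP verb might look roughly like the sketch below. It uses the Flask microframework purely as one possible choice; the route layout and the in-memory "database" are illustrative assumptions, not part of the original answer.

from flask import Flask, request

app = Flask(__name__)
users = {}        # toy in-memory store, keyed by user id
next_id = [1]

@app.route("/user", methods=["POST", "PUT"])
def create_user():
    # Create a new user from the submitted form fields and report its location
    uid = next_id[0]
    next_id[0] += 1
    users[uid] = request.form.to_dict()
    return "", 201, {"Location": "/user/%d" % uid}

@app.route("/user/<int:uid>", methods=["GET"])
def get_user(uid):
    # Read: a missing id yields 404, otherwise the stored fields are returned as JSON
    if uid not in users:
        return "", 404
    return users[uid]

@app.route("/user/<int:uid>", methods=["DELETE"])
def delete_user(uid):
    # Delete: remove the user if present; 204 means "done, nothing to return"
    users.pop(uid, None)
    return "", 204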
Spencer Ruport ,
That's "using HTTP properly", which is not the same as "restful" (although it's related to it) � Julian Reschke Mar 22 '09 at 15:56
Hank Gay ,
It's programming where the architecture of your system fits the REST style laid out by Roy Fielding in his thesis. Since this is the architectural style that describes the web (more or less), lots of people are interested in it. Bonus answer: No. Unless you're studying software architecture as an academic or designing web services, there's really no reason to have heard the term.
moodboom ,
but not straightforward .. makes it more complicated than it needs to be. – hasen Mar 22 '09 at 15:38
Only You , 2013-07-12 16:33:02
I would say RESTful programming would be about creating systems (APIs) that follow the REST architectural style. I found this fantastic, short, and easy-to-understand tutorial about REST by Dr. M. Elkstein, and I quote the essential part that would answer your question for the most part: Learn REST: A Tutorial
REST is an architecture style for designing networked applications. The idea is that, rather than using complex mechanisms such as CORBA, RPC or SOAP to connect between machines, simple HTTP is used to make calls between machines.
  • In many ways, the World Wide Web itself, based on HTTP, can be viewed as a REST-based architecture.
RESTful applications use HTTP requests to post data (create and/or update), read data (e.g., make queries), and delete data. Thus, REST uses HTTP for all four CRUD (Create/Read/Update/Delete) operations.
I don't think you should feel stupid for not hearing about REST outside Stack Overflow... I would be in the same situation! Answers to this other SO question on Why is REST getting big now could ease some feelings.
Only You ,
This article explains the relationship between HTTP and REST: freecodecamp.org/news/ – Only You Sep 21 '19 at 21:32
tompark , 2009-03-23 17:11:58
I apologize if I'm not answering the question directly, but it's easier to understand all this with more detailed examples. Fielding is not easy to understand due to all the abstraction and terminology. There's a fairly good example here: Explaining REST and Hypertext: Spam-E the Spam Cleaning Robot. And even better, there's a clean explanation with simple examples here (the powerpoint is more comprehensive, but you can get most of it in the html version): http://www.xfront.com/REST.ppt or http://www.xfront.com/REST.html

After reading the examples, I could see why Ken is saying that REST is hypertext-driven. I'm not actually sure that he's right though, because that /user/123 is a URI that points to a resource, and it's not clear to me that it's unRESTful just because the client knows about it "out-of-band."

That xfront document explains the difference between REST and SOAP, and this is really helpful too. When Fielding says, "That is RPC. It screams RPC.", it's clear that RPC is not RESTful, so it's useful to see the exact reasons for this. (SOAP is a type of RPC.)
coder_tim ,
cool links, thanks. I'm tired of these REST guys that say some example is not "REST-ful", but then refuse to say how to change the example to be REST-ful. – coder_tim Feb 1 '12 at 19:19
Suresh Gupta , 2013-07-25 09:05:19
What is REST? In official words, REST is an architectural style built on certain principles using the current "Web" fundamentals. There are 5 basic fundamentals of the web which are leveraged to create REST services.
mendez7 ,
What does Communication is Done by Representation mean? – mendez7 Mar 10 '19 at 21:59
Ken , 2009-03-22 16:36:31
I see a bunch of answers that say putting everything about user 123 at resource "/user/123" is RESTful. Roy Fielding, who coined the term, says REST APIs must be hypertext-driven. In particular, "A REST API must not define fixed resource names or hierarchies". So if your "/user/123" path is hardcoded on the client, it's not really RESTful. A good use of HTTP, maybe, maybe not. But not RESTful. It has to come from hypertext.
MSalters ,
so .... how would that example be restful? how would you change the url to make it restful? – hasen Mar 22 '09 at 16:49
inf3rno , 2013-11-22 22:49:13
The answer is very simple: there is a dissertation written by Roy Fielding. In that dissertation he defines the REST principles. If an application fulfills all of those principles, then it is a REST application. The term RESTful was created because people exhausted the word REST by calling their non-REST applications REST. After that, the term RESTful was exhausted as well. Nowadays we talk about Web APIs and Hypermedia APIs, because most of the so-called REST applications did not fulfill the HATEOAS part of the uniform interface constraint. The REST constraints are the following:
  1. client-server architecture - So it does not work with, for example, PUB/SUB sockets; it is based on REQ/REP.
  2. stateless communication - The server does not maintain the states of the clients. This means that you cannot use server-side session storage and you have to authenticate every request. Your clients possibly send basic auth headers through an encrypted connection. (In large applications it is hard to maintain many sessions.)
  3. usage of cache if you can - So you don't have to serve the same requests again and again.
  4. uniform interface as common contract between client and server - The contract between the client and the server is not maintained by the server. In other words, the client must be decoupled from the implementation of the service. You can reach this state by using standard solutions, like the IRI (URI) standard to identify resources, the HTTP standard to exchange messages, standard MIME types to describe the body serialization format, and metadata (possibly RDF vocabs, microformats, etc.) to describe the semantics of different parts of the message body. To decouple the IRI structure from the client, you have to send hyperlinks to the clients in hypermedia formats (HTML, JSON-LD, HAL, etc.). So a client can use the metadata (possibly link relations, RDF vocabs) assigned to the hyperlinks to navigate the state machine of the application through the proper state transitions in order to achieve its current goal. For example, when a client wants to send an order to a webshop, it has to check the hyperlinks in the responses sent by the webshop. By checking the links it finds one described with http://schema.org/OrderAction. The client knows the schema.org vocab, so it understands that by activating this hyperlink it will send the order. So it activates the hyperlink and sends a POST https://example.com/api/v1/order message with the proper body. After that, the service processes the message and responds with the result carrying the proper HTTP status header, for example 201 Created on success. (A sketch of such a hypermedia response follows after this list.) To annotate messages with detailed metadata, the standard solution is to use an RDF format, for example JSON-LD with a REST vocab such as Hydra, domain-specific vocabs like schema.org or any other linked data vocab, and maybe a custom application-specific vocab if needed. This is not easy, which is why most people use HAL and other simple formats which usually provide only a REST vocab, but no linked data support.
  5. build a layered system to increase scalability - The REST system is composed of hierarchical layers. Each layer contains components which use the services of components in the next layer below. So you can add new layers and components effortlessly. For example, there is a client layer which contains the clients, and below that there is a service layer which contains a single service. Now you can add a client-side cache between them. After that you can add another service instance and a load balancer, and so on... The client code and the service code won't change.
  6. code on demand to extend client functionality - This constraint is optional. For example, you can send a parser for a specific media type to the client, and so on... In order to do this you might need a standard plugin loader system in the client, or your client will be coupled to the plugin loader solution.
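As promised above, here is a rough sketch of the webshop exchange described under the uniform interface constraint. The paths and the JSON-LD-flavored fields are illustrative assumptions only, loosely modeled on schema.org terms; they are not taken from any specific service.

GET /api/v1/cart/42
200 OK
Content-Type: application/ld+json
{
  "@context": "http://schema.org",
  "@type": "Order",
  "potentialAction": {
    "@type": "OrderAction",
    "target": { "@type": "EntryPoint",
                "urlTemplate": "https://example.com/api/v1/order",
                "httpMethod": "POST" }
  }
}

POST /api/v1/order        (URL discovered from the OrderAction link above)
...order body...
201 Created

The client never had the /api/v1/order URL built in; it found it by recognizing the OrderAction annotation in the representation it received.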
REST constraints result in a highly scalable system in which the clients are decoupled from the implementations of the services. So the clients can be reusable and general, just like the browsers on the web. The clients and the services share the same standards and vocabs, so they can understand each other despite the fact that the client does not know the implementation details of the service. This makes it possible to create automated clients which can find and utilize REST services to achieve their goals. In the long term these clients can communicate with each other and trust each other with tasks, just like humans do. If we add learning patterns to such clients, the result will be one or more AIs using the web of machines instead of a single server park. So in the end the dream of Berners-Lee, the semantic web and artificial intelligence, will be reality. So in 2030 we end up terminated by Skynet. Until then ... ;-)
kenorb , 2014-06-15 19:02:17
RESTful (Representational state transfer) API programming is writing web applications in any programming language by following 5 basic software architectural style principles:
  1. Resource (data, information).
  2. Unique global identifier (all resources are unique identified by URI ).
  3. Uniform interface - use simple and standard interface (HTTP).
  4. Representation - all communication is done by representation (e.g. XML / JSON )
  5. Stateless (every request happens in complete isolation, which makes it easier to cache and load-balance).
In other words, you're writing simple point-to-point network applications over HTTP using verbs such as GET, POST, PUT or DELETE by implementing a RESTful architecture, which proposes standardization of the interface each "resource" exposes. It is nothing more than using current features of the web in a simple and effective way (a highly successful, proven and distributed architecture). It is an alternative to more complex mechanisms like SOAP, CORBA and RPC. RESTful programming conforms to Web architecture design and, if properly implemented, it allows you to take full advantage of the scalable Web infrastructure.
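As a sketch of the stateless, point-to-point style, each request below carries everything the server needs, including the credentials, so no server-side session has to survive between calls. The host, path and credentials are hypothetical, and the example assumes the third-party Python requests library.

import requests

BASE = "https://api.example.com"    # hypothetical service
CRED = ("alice", "s3cret")          # hypothetical credentials, resent with every request

# Each request is self-contained: it authenticates itself and names the resource it wants.
# Because the server keeps no session between the two calls, either one could be handled
# by a different server instance behind a load balancer or answered from a cache.
r1 = requests.get(BASE + "/user/123", auth=CRED)
r2 = requests.delete(BASE + "/user/123", auth=CRED)
print(r1.status_code, r2.status_code)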
Nathan Andelin ,
If I had to reduce the original dissertation on REST to just 3 short sentences, I think the following captures its essence:
  1. Resources are requested via URLs.
  2. Protocols are limited to what you can communicate by using URLs.
  3. Metadata is passed as name-value pairs (post data and query string parameters).
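For instance, the same name-value pairs can travel either as query string parameters or as post data (the paths and field names here are only illustrative):

GET /user/search?lname=Doe&age=25

POST /user
fname=John&lname=Doe&age=25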
After that, it's easy to fall into debates about adaptations, coding conventions, and best practices. Interestingly, there is no mention of HTTP POST, GET, DELETE, or PUT operations in the dissertation. That must be someone's later interpretation of a "best practice" for a "uniform interface".

When it comes to web services, it seems that we need some way of distinguishing WSDL- and SOAP-based architectures, which add considerable overhead and arguably much unnecessary complexity to the interface. They also require additional frameworks and developer tools in order to implement. I'm not sure if REST is the best term to distinguish between common-sense interfaces and overly engineered interfaces such as WSDL and SOAP. But we need something.
suing , 2012-02-01 21:20:21
REST is an architectural pattern and style of writing distributed applications. It is not a programming style in the narrow sense. Saying you use the REST style is similar to saying that you built a house in a particular style: for example, Tudor or Victorian. Both REST as a software style and Tudor or Victorian as a home style can be defined by the qualities and constraints that make them up. For example, REST must have client-server separation where messages are self-describing. Tudor-style homes have overlapping gables and steeply pitched roofs with front-facing gables. You can read Roy's dissertation to learn more about the constraints and qualities that make up REST.

REST, unlike home styles, has had a tough time being consistently and practically applied. This may have been intentional, leaving its actual implementation up to the designer. So you are free to do what you want, and as long as you meet the constraints set out in the dissertation you are creating REST systems.

Bonus: The entire web is based on REST (or REST was based on the web). Therefore, as a web developer, you might want to be aware of that, although it's not necessary for writing good web apps.
Kal , 2017-03-31 03:12:53
Here is my basic outline of REST. I tried to demonstrate the thinking behind each of the components in a RESTful architecture so that understanding the concept is more intuitive. Hopefully this helps demystify REST for some people!

REST (Representational State Transfer) is a design architecture that outlines how networked resources (i.e. nodes that share information) are designed and addressed. In general, a RESTful architecture makes it so that the client (the requesting machine) and the server (the responding machine) can read, write, and update data without the client having to know how the server operates, and the server can pass it back without needing to know anything about the client. Okay, cool... but how do we do this in practice?

But the REST architecture doesn't end there! While the above fulfills the basic needs of what we want, we also want an architecture that supports high-volume traffic, since any given server usually handles responses from a number of clients. Thus, we don't want to overwhelm the server by having it remember information about previous requests.

Now, if all of this sounds familiar, then great. The Hypertext Transfer Protocol (HTTP), which defines the communication protocol of the World Wide Web, is an implementation of the abstract notion of RESTful architecture (or an instance of the REST class, if you're an OOP fanatic like me). In this implementation of REST, the client and server interact via GET, POST, PUT, DELETE, etc., which are part of the universal language, and resources can be pointed to using URLs.
minghua ,
I think the point of RESTful is the separation of statefulness into a higher layer, while making use of the internet (protocol) as a stateless transport layer. Most other approaches mix things up. It has been the best practical approach for handling the fundamental changes of programming in the internet era. Regarding those fundamental changes, Erik Meijer has a discussion here: http://www.infoq.com/interviews/erik-meijer-programming-language-design-effects-purity#view_93197 . He summarizes them as the five effects, and presents a solution by designing it into a programming language. The solution could also be achieved at the platform or system level, regardless of the language. RESTful can be seen as one of the solutions that has been very successful in current practice.
With the RESTful style, you get and manipulate the state of the application across an unreliable internet. If the current operation fails to get the correct and current state, the application needs the zero-validation principle to help it continue. If it fails to manipulate the state, it usually uses multiple stages of confirmation to keep things correct. In this sense, REST is not itself a whole solution; it needs the functions in other parts of the web application stack to support its working.
Given this viewpoint, the REST style is not really tied to the internet or web applications. It is a fundamental solution to many programming situations. It is not simple either; it just makes the interface really simple, and copes with other technologies amazingly well.
Just my 2c.
Edit: Two more important aspects:
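As a loose illustration of getting state across an unreliable network (the URL and fields here are hypothetical): because a RESTful GET is safe to repeat, a client can simply retry it until it obtains the current state.

    import time
    import requests

    def fetch_state(url, attempts=3):
        # GET is safe and idempotent, so retrying it after a transient
        # failure cannot corrupt the application state on the server.
        for attempt in range(attempts):
            try:
                resp = requests.get(url, timeout=5)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException:
                if attempt == attempts - 1:
                    raise
                time.sleep(2 ** attempt)   # simple backoff before retrying

    state = fetch_state("https://api.example.com/orders/42")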
minghua ,
An MVC viewpoint: the blog post Rest Worst Practices suggested not conflating models and resources. The book Two Scoops of Django suggests that the REST API is the view, and that business logic should not be mixed into the view; the code for the app should remain in the app. – minghua Jun 25 '15 at 6:20
kalin ,
This is an amazingly long "discussion" and yet quite confusing, to say the least. IMO:
1) There is no such thing as restful programming, without a big joint and lots of beer :)
2) Representational State Transfer (REST) is an architectural style specified in the dissertation of Roy Fielding. It has a number of constraints. If your service/client respect those, then it is RESTful. This is it. You can summarize the constraints quite significantly.
There is another very good post which explains things nicely.
A lot of answers copy/pasted valid information, mixing it and adding some confusion. People talk here about levels, about RESTful URIs (there is no such thing!), and about applying the HTTP methods GET, POST, PUT... REST is not about that, or not only about that.
For example, links: it is nice to have a beautifully looking API, but in the end the client/server does not really care about the links you get or send; it is the content that matters. In the end, any RESTful client should be able to consume any RESTful service as long as the content format is known.
Chris DaMour ,
Old question, newish way of answering. There's a lot of misconception out there about this concept. I always try to remember:
  1. Structured URLs and HTTP methods/verbs are not the definition of restful programming.
  2. JSON is not restful programming
  3. RESTful programming is not for APIs
I define restful programming as
An application is restful if it provides resources (being the combination of data + state transition controls) in a media type the client understands.
To be a restful programmer you must be trying to build applications that allow actors to do things, not just exposing the database. State transition controls only make sense if the client and server agree upon a media type representation of the resource. Otherwise there's no way to know what's a control and what isn't, or how to execute a control. I.e., if browsers didn't know <form> tags in HTML, then there'd be nothing for you to submit to transition state in your browser. I'm not looking to self-promote, but I expand on these ideas in great depth in my talk http://techblog.bodybuilding.com/2016/01/video-what-is-restful-200.html . An excerpt from my talk is about the often-referred-to Richardson Maturity Model; I don't believe in the levels, you either are RESTful (level 3) or you are not, but what I like to call out about it is what each level does for you on your way to RESTful.
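A rough sketch of that definition in code terms, with a made-up media type: the representation carries both the data and the controls that tell the client which state transitions are available.

    # The representation carries the data plus the controls (state transitions)
    # the client may execute next; the control names are made up for illustration.
    invoice = {
        "data": {"id": 42, "status": "unpaid", "amount": 99.50},
        "controls": {
            "pay":    {"method": "POST", "href": "/invoices/42/payment"},
            "cancel": {"method": "DELETE", "href": "/invoices/42"},
        },
    }

    def available_actions(resource):
        # The client learns what it can do from the representation itself,
        # not from out-of-band knowledge about the server.
        return list(resource["controls"])

    print(available_actions(invoice))   # ['pay', 'cancel']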
Jaider , 2017-10-02 21:23:50
REST defines 6 architectural constraints which make any web service a true RESTful API.
  1. Uniform interface
  2. Client-server
  3. Stateless
  4. Cacheable
  5. Layered system
  6. Code on demand (optional)
https://restfulapi.net/rest-architectural-constraints/
Roman Vottner ,
Fielding added some further rules RESTful APIs/clients have to adhere to – Roman Vottner Oct 2 '17 at 22:09
Imran Ahmad ,
REST is an architectural style which is based on web standards and the HTTP protocol (introduced in 2000).
In a REST-based architecture, everything is a resource (users, orders, comments). A resource is accessed via a common interface based on the HTTP standard methods (GET, PUT, PATCH, DELETE, etc.).
In a REST-based architecture you have a REST server which provides access to the resources. A REST client can access and modify the REST resources.
Every resource should support the common HTTP operations. Resources are identified by global IDs (which are typically URIs). REST allows resources to have different representations, e.g., text, XML, JSON, etc. The REST client can ask for a specific representation via the HTTP protocol (content negotiation). HTTP methods: the PUT, GET, POST and DELETE methods are typically used in REST-based architectures to create, read, update and delete resources.
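A hedged sketch of what this looks like from a client, using Python's requests library against a hypothetical api.example.com host (the resource names and payloads are illustrative):

    import requests

    BASE = "https://api.example.com"   # hypothetical server

    # Content negotiation: ask for a JSON representation of the resource...
    order = requests.get(f"{BASE}/orders/42",
                         headers={"Accept": "application/json"}).json()

    # ...or for an XML representation of the same resource instead.
    order_xml = requests.get(f"{BASE}/orders/42",
                             headers={"Accept": "application/xml"}).text

    # The common HTTP operations on the resource:
    requests.post(f"{BASE}/orders", json={"item": "book"})                # create
    requests.put(f"{BASE}/orders/42", json={"item": "book", "qty": 2})    # replace
    requests.delete(f"{BASE}/orders/42")                                  # delete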
djvg ,
Several quotes, but not a single source mentioned. Where did you get this? – djvg Dec 13 '18 at 19:02
lokesh , 2016-06-03 11:35:49
The REST === HTTP analogy is not correct unless you stress the fact that it "MUST" be HATEOAS driven. Roy himself cleared it up here: A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user's manipulation of those representations. The transitions may be determined (or limited by) the client's knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
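A minimal sketch of a client honoring this constraint, assuming a HAL-style JSON media type (the host and link relations are made up): the client starts at the bookmark URI and follows only server-provided links.

    import requests

    # Entry point (bookmark); everything else is discovered from responses.
    entry = requests.get("https://api.example.com/",
                         headers={"Accept": "application/hal+json"}).json()

    # Follow only link relations defined by the media type,
    # never hard-coded URI templates.
    orders_url = entry["_links"]["orders"]["href"]
    orders = requests.get(orders_url).json()

    # A server-provided "next" control drives the next state transition, if present.
    next_link = orders.get("_links", {}).get("next")
    if next_link:
        more = requests.get(next_link["href"]).json()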
inf3rno ,
doesn't answer the question as well as the others, but +1 for information that is relevant! – CybeX Oct 2 '17 at 19:06
GowriShankar ,
REST stands for Representational State Transfer. It relies on a stateless, client-server, cacheable communications protocol; in virtually all cases, the HTTP protocol is used. REST is often used in mobile applications, social networking web sites, mashup tools and automated business processes. The REST style emphasizes that interactions between clients and services are enhanced by having a limited number of operations (verbs). Flexibility is provided by assigning resources (nouns) their own unique universal resource indicators (URIs). Introduction about REST
qmckinsey ,
Talking is more than simply exchanging information. A protocol is actually designed so that no talking has to occur. Each party knows what their particular job is because it is specified in the protocol. Protocols allow for pure information exchange at the expense of being able to change the possible actions. Talking, on the other hand, allows one party to ask what further actions can be taken by the other party. They can even ask the same question twice and get two different answers, since the state of the other party may have changed in the interim. Talking is RESTful architecture. Fielding's thesis specifies the architecture that one would have to follow if one wanted to allow machines to talk to one another rather than simply communicate.
ACV , 2016-08-24 17:57:29
There is no such notion as "RESTful programming" per se. It would be better called the RESTful paradigm, or even better, RESTful architecture. It is not a programming language; it is a paradigm. From Wikipedia:
In computing, representational state transfer (REST) is an architectural style used for web development.
Benoit Essiambre , 2012-02-01 23:52:15
The point of REST is that if we agree to use a common language for basic operations (the HTTP verbs), the infrastructure can be configured to understand them and optimize them properly, for example, by making use of caching headers to implement caching at all levels. With a properly implemented RESTful GET operation, it shouldn't matter if the information comes from your server's DB, your server's memcache, a CDN, a proxy's cache, your browser's cache or your browser's local storage. The fastest, most readily available up-to-date source can be used.
Saying that REST is just a syntactic change from using GET requests with an action parameter to using the available HTTP verbs makes it look like it has no benefits and is purely cosmetic. The point is to use a language that can be understood and optimized by every part of the chain. If your GET operation has an action with side effects, you have to skip all HTTP caching or you'll end up with inconsistent results.
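As a sketch of the caching benefit under these assumptions (Flask chosen here purely for illustration; the /tours resource and its fields are made up), a side-effect-free GET can advertise itself as cacheable to every layer in the chain:

    from flask import Flask, jsonify, make_response

    app = Flask(__name__)

    @app.route("/tours/<int:tour_id>", methods=["GET"])
    def get_tour(tour_id):
        tour = {"id": tour_id, "name": "City walk"}   # stand-in for a DB lookup
        resp = make_response(jsonify(tour))
        # This GET has no side effects, so browsers, proxies and CDNs
        # are all free to answer repeated requests from their caches.
        resp.headers["Cache-Control"] = "public, max-age=300"
        resp.set_etag(str(tour_id))
        return resp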
osa ,
"Saying that Rest is just a syntactic change... makes it look like it has no benefits and is purely cosmetic" --- that's exactly why I am reading answers here on SO. Note that you did not explain, why REST is not purely cosmetic. � osa Oct 8 '13 at 17:14
kkashyap1707 , 2016-08-01 06:42:41
What is API testing? API testing uses a program to send calls to the API and check the output. It treats the component under test as a black box. The objective of API testing is to confirm correct behavior and error handling of the component prior to its integration into an application. REST API: REST stands for Representational State Transfer. 4 Commonly Used API Methods:
  1. GET: It provides read-only access to a resource.
  2. POST: It is used to create a new resource.
  3. PUT: It is used to update or replace an existing resource or create a new resource.
  4. DELETE: It is used to remove a resource.
Steps to Test an API Manually: To test an API manually, we can use browser-based REST API plugins.
  1. Install the POSTMAN (Chrome) / REST (Firefox) plugin
  2. Enter the API URL
  3. Select the REST method
  4. Set the Content-Type header
  5. Enter the request JSON (for POST)
  6. Click on Send
  7. The output response will be returned
Steps to Automate REST API
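A minimal sketch of what such an automated check could look like in Python, using the requests library against a hypothetical host and /tours endpoint (the fields and status codes are assumptions):

    import requests

    BASE = "https://api.example.com"   # placeholder host

    def test_create_and_fetch_tour():
        # Create a resource, then read it back and check the round trip.
        created = requests.post(f"{BASE}/tours", json={"name": "City walk"})
        assert created.status_code in (200, 201)
        tour_id = created.json()["id"]

        fetched = requests.get(f"{BASE}/tours/{tour_id}")
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "City walk"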
therealprashant ,
this is not even a proper answer – therealprashant Aug 5 '16 at 7:17
Krishna Ganeriwal , 2017-08-29 11:55:15
This is rarely mentioned anywhere, but the Richardson Maturity Model is one of the best methods to actually judge how RESTful one's API is. More about it here: Richardson's Maturity Model
Roman Vottner ,
If you look at the constraints Fielding put on REST you will clearly see that an API needs to have reached Layer 3 of the RMM in order to be viewed as RESTful, though even this is not enough, as there are still plenty of ways to fail the REST idea: the decoupling of clients from server APIs. Sure, Layer 3 fulfills the HATEOAS constraint, but it is still easy to break the requirements and couple clients tightly to a server (i.e. by using typed resources) – Roman Vottner Oct 2 '17 at 22:21
Lord , 2020-05-21 11:09:17
This answer is for absolute beginners; let's look at the most used API architecture today. To understand RESTful programming or a RESTful API, you first have to understand what an API is. At a very high level, API stands for Application Programming Interface; it's basically a piece of software that can be used by another piece of software in order to allow applications to talk to each other. The most widely used type of API in the world is the web API, where an app sends data to a client whenever a request comes in.
In fact, APIs aren't only used to send data and aren't always related to web development, JavaScript, Python, or any particular programming language or framework. The "application" in API can actually mean many different things, as long as the piece of software is relatively stand-alone. Take, for example, the file system or HTTP modules: they are small pieces of software, and we interact with them by using their API. When we use the read-file function of a file system module in any programming language, we are actually using the file-system-reading API. When we do DOM manipulation in the browser, we're not really using the JavaScript language itself, but rather the DOM API that the browser exposes to us. Or, as another example, if we create a class in a programming language like Java and add some public methods or properties to it, these methods will be the API of each object created from that class, because we are giving other pieces of software the possibility of interacting with our initial piece of software, the objects in this case. So API has a broader meaning than just building web APIs. Now let's take a look at the REST architecture for building APIs.
REST, which stands for Representational State Transfer, is basically a way of building web APIs in a logical way, making them easy to consume for ourselves or for others. To build RESTful APIs following the REST architecture, we just need to follow a couple of principles:
  1. We need to separate our API into logical resources.
  2. These resources should then be exposed using resource-based URLs.
  3. To perform different actions on data, like reading, creating, or deleting data, the API should use the right HTTP methods and not the URL.
  4. The data that we send back to the client, or that we receive from the client, should usually use the JSON data format, with some formatting standard applied to it.
  5. Finally, another important principle of REST APIs is that they must be stateless.
Separate APIs into logical resources: The key abstraction of information in REST is a resource, and therefore all the data that we want to share in the API should be divided into logical resources. What actually is a resource? In the context of REST, it is an object or a representation of something which has some data associated with it. For example, in a tour-guide application, tours, users, places, or reviews are examples of logical resources. So basically any information that can be named can be a resource; it just has to be a name, though, not a verb.
Expose structure: Now we need to expose, which means make available, the data using some structured URLs that the client can send a request to. For example, in https://www.tourguide.com/addNewTour the entire address is called the URL, and /addNewTour is called an API endpoint. Our API will have many different endpoints, just like the ones below:
https://www.tourguide.com/addNewTour
https://www.tourguide.com/getTour
https://www.tourguide.com/updateTour
https://www.tourguide.com/deleteTour
https://www.tourguide.com/getRoursByUser
https://www.tourguide.com/deleteToursByUser
Each of these endpoints will send different data back to the client and will also perform different actions. Now there is something very wrong with these endpoints, because they don't follow the third rule, which says that we should only use HTTP methods in order to perform actions on data. So endpoints should only contain our resources, not the actions we perform on them, because that quickly becomes a nightmare to maintain.
How should we use these HTTP methods in practice? Let's see how these endpoints should actually look, starting with /getTour. The getTour endpoint is for getting data about a tour, so we should simply name the endpoint /tours and send the data whenever a GET request is made to it. In other words, when a client uses the GET HTTP method to access the endpoint, we only have resources in the endpoint or in the URL and no verbs, because the verb is now in the HTTP method. The common practice is to always use the resource name in the plural, which is why I wrote /tours, not /tour. The convention is that calling the endpoint /tours will return all the tours in the database, but if we only want the tour with one ID, let's say seven, we add that seven after another slash (/tours/7) or in a search query (/tours?id=7). And of course, it could also be the name of a tour instead of the ID. HTTP methods: what's really important here is that the endpoint name is exactly the same for all of them.
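A minimal sketch of such resource-based endpoints, using Flask purely as an illustration (the in-memory store and handlers are made up); the HTTP methods listed next map onto the same /tours routes:

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    tours = {}  # in-memory stand-in for a database

    @app.route("/tours", methods=["GET", "POST"])
    def tours_collection():
        if request.method == "POST":              # create a new tour
            tour_id = len(tours) + 1
            tours[tour_id] = request.get_json()
            return jsonify(id=tour_id), 201
        return jsonify(list(tours.values()))      # read all tours

    @app.route("/tours/<int:tour_id>", methods=["GET", "DELETE"])
    def tour_item(tour_id):
        if request.method == "DELETE":            # delete one tour
            tours.pop(tour_id, None)
            return "", 204
        return jsonify(tours.get(tour_id, {}))    # read one tour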
GET: (for requesting data from the server.) https://www.tourguide.com/tours/7
POST: (for sending data to the server.) https://www.tourguide.com/tours
PUT/PATCH: (for updating data on the server.) https://www.tourguide.com/tours/7
DELETE: (for deleting data on the server.) https://www.tourguide.com/tours/7
The difference between PUT and PATCH: with PUT, the client is supposed to send the entire updated object, while with PATCH it is supposed to send only the part of the object that has changed. By using these HTTP methods, users can perform the four basic CRUD operations; CRUD stands for Create, Read, Update, and Delete. Now there could be a situation like the one below: /getToursByUser can simply be translated to /users/tours; for user number 3, the endpoint will be /users/3/tours, and if we want to delete a particular tour of a particular user, the URL should be /users/3/tours/7 (here user id 3 and tour id 7). So there really are tons of possibilities for combining resources like this.
Send data as JSON: Regarding the data that the client actually receives, or that the server receives from the client, we usually use the JSON data format. Before sending JSON data we usually do some simple response formatting; there are a couple of standards for this, and one of the very simple ones is called Jsend. We simply create a new object, add a status message to it in order to inform the client whether the request was a success, fail, or error, and then put our original data into a new field called data. Wrapping the data in an additional object like this is called enveloping, and it's a common practice to mitigate some security issues and other problems.
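A small sketch of such a Jsend-style envelope (the payload is made up for illustration):

    import json

    def jsend_success(data):
        # Envelope the original payload with a status field (Jsend convention).
        return json.dumps({"status": "success", "data": data})

    print(jsend_success({"tour": {"id": 7, "name": "City walk"}}))
    # {"status": "success", "data": {"tour": {"id": 7, "name": "City walk"}}}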
A RESTful API should always be stateless: finally, a RESTful API should always be stateless, meaning that all state is handled on the client side, not on the server. State simply refers to a piece of data in the application that might change over time, for example, whether a certain user is logged in, or, on a page with a list spread over several pages, what the current page is. The fact that state should be handled on the client means that each request must contain all the information necessary to process it on the server; the server should never have to remember a previous request in order to process the current one.

Let's say that we are currently on page five and we want to move forward to page six. We could have a simple endpoint called /tours/nextPage and submit a request to the server, but the server would then have to figure out what the current page is and, based on that, send the next page to the client. In other words, the server would have to remember the previous request, which is exactly what we want to avoid in RESTful APIs. Instead, we should create a /tours/page endpoint and pass the number six to it in order to request page number six: /tours/page/6. The server doesn't have to remember anything; all it has to do is send back the data for page number six, as requested. Statelessness, and its opposite, statefulness, are very important concepts in computer science and in applications in general.

[Jul 14, 2020] Red Hat Ceph Storage 4 arrives

Jul 14, 2020 | www.zdnet.com

Do you need really serious software-defined storage to handle petabytes of data? Then, Red Hat, with the latest edition of Red Hat Ceph Storage (RHCS), has the technology you need.


RHCS is based on the Nautilus version of the Ceph open-source storage project. It's designed to work on commercial off-the-shelf (COTS) hardware. But, with its ability to handle petabytes of data, you're most likely to use it on data-farms, data-centers, and clouds.

For example, you can use it to deploy petabyte-scale, Amazon Simple Storage Service (S3)-compatible object storage. Red Hat claims that, in recent internal testing, RHCS 4 "delivered over a two-time performance boost for write-intensive workloads, making it even better-suited to fulfill the performance needs of today's data-intensive applications."

It's also been DevOps-optimized, so you can use RHCS 4 to move from storage-centric to service-centric operational models. To do this, it relies on improved Ansible DevOps integration.

This helps RHCS with self-managing and self-healing. This, in turn, makes automated backup, recovery, and provisioning easier and -- what's perhaps even more important -- more reliable. Red Hat states this will help enterprises looking to meet business continuity and "always-on" service level agreements (SLAs).

Red Hat Ceph Storage 4 includes four significant new features. These are:

• A simplified installer experience, which enables standard installations that can be performed in less than 10 minutes.
• A new management dashboard for a unified, "heads up" view of operations at all times, helping teams to identify and resolve problems more quickly.
• A new quality of service (QoS) monitoring feature, which helps verify storage QoS for applications in a multi-tenant, hosted cloud environment.
• Integrated bucket notifications to support Kubernetes-native serverless architectures, which enable automated data pipelines.

Its parent open-source project, Ceph, is a distributed object store and file system. It's designed from the get-go to provide excellent big data performance, reliability, and scalability. It supports object, block, and file storage.

Amita Potnis, the IDC research director for Infrastructure Systems, likes this new release.

In a statement, Potnis said: "The massive growth of data and emerging workloads are challenges faced by many organizations. Red Hat Ceph Storage 4 can enable businesses to efficiently scale and support ever-growing data and workload requirements while providing simplified installation and management."

RHCS 4 is available today.

[Oct 08, 2019] How does converting from raid 5 to 6 work? (on the back end)

Oct 08, 2019 | www.reddit.com

Gnonthgol

4 points · 13 hours ago

RAID 5 stripes the data over N disks with an additional stripe containing the parity, basically the XOR of all the other disks. RAID 6 uses the same parity as RAID 5 but also uses a different type of parity on an extra disk. So RAID 5 requires N+1 disks and RAID 6 requires N+2 disks. In theory you can just add another disk, fill it with the different parity bits, and you have a RAID 6; however, it is not that simple. The parity disks on both RAID 5 and 6 rotate for each stripe, so if the parity is stored on disk 1 for the first stripe, it is stored on disk 2 for the second, and so forth. If you add an additional disk, all the stripes need to be rewritten in the new schema. Some RAID controllers have this functionality. The tricky thing is that you need to track how far you have gone, so that in the case of a power failure you can still retrieve the data. In any case it does require another disk.
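As an aside, here is a minimal C sketch of the XOR arithmetic described in this comment (the disk count and block size are made up purely for illustration): the parity block is the XOR of all data blocks, and a single lost block is rebuilt by XOR-ing the parity with the surviving blocks.

    /* Minimal illustration of RAID 5 style parity: parity = XOR of all data
       blocks, and any single lost block = parity XOR surviving blocks. */
    #include <stdio.h>

    #define DISKS 4   /* number of data disks in one stripe (hypothetical) */
    #define BLOCK 8   /* bytes per block, tiny on purpose                  */

    int main(void)
    {
        unsigned char data[DISKS][BLOCK] = { "disk-0!", "disk-1!", "disk-2!", "disk-3!" };
        unsigned char parity[BLOCK] = {0};
        unsigned char rebuilt[BLOCK] = {0};
        int d, i;

        /* Compute the parity block for the stripe. */
        for (d = 0; d < DISKS; d++)
            for (i = 0; i < BLOCK; i++)
                parity[i] ^= data[d][i];

        /* Pretend disk 2 failed; rebuild its block from parity + survivors. */
        for (i = 0; i < BLOCK; i++) {
            rebuilt[i] = parity[i];
            for (d = 0; d < DISKS; d++)
                if (d != 2)
                    rebuilt[i] ^= data[d][i];
        }

        printf("rebuilt block from disk 2: %s\n", rebuilt);  /* prints disk-2! */
        return 0;
    }

RAID 6 keeps this parity and adds a second, differently computed one (typically Reed-Solomon based), which is why it can survive two simultaneous disk failures.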

OnARedditDiet Windows Admin 4 points · 14 hours ago

http://www.ewams.net/?date=2013/05/02&view=Converting_RAID5_to_RAID6_in_mdadm

You're going to put your RAID in degraded mode, so you're basically causing it to be in a one-drive-failed scenario and then asking it to rewrite every disk. Is that something you want to do?

Dry_Soda 6 points · 12 hours ago

What could possibly go wrong? #YOLO

25cmshlong OCP DBA 12c, OCE 12c, OCP Solaris 11, RHCE, NCSE ONTAP, CCNA R&S 1 point · 9 hours ago

Not much. It is adding another parity disk, so in the worst case the array will be left in its initial state - single parity (RAID 5).

(Ofc the truly worst case is that reading all the drives will overload the power supply and fry the whole disk subsystem. But it is better not to think about that, since RAID 6 will not help there either :)

OnARedditDiet Windows Admin 1 point · 9 hours ago

It's very well known that the full read/write pass that comes from rebuilding a degraded RAID can potentially crash the RAID by exposing existing hard drive issues. In this case nothing has failed, but you could crash the RAID by fixing a non-fault situation.

25cmshlong OCP DBA 12c, OCE 12c, OCP Solaris 11, RHCE, NCSE ONTAP, CCNA R&S 1 point · 8 hours ago
· edited 5 hours ago

EDIT: Oops, I remembered that in most implementations of RAID (i.e., not ZFS & WAFL) there are no dedicated parity/dparity drives, but instead rotating parity. So there will definitely be reading and rewriting on all disks of the array.

So the text below is incorrect for most RAID subsystems.

That's true, but not a concern while adding a parity disk. If some stripes with latent errors appear, they can be recovered using the original parity. All writes during the conversion go to the new parity disk; the original data on the drives stays intact.

drbluetongue Drunk while on-call 1 point · 5 hours ago

I don't know why you were downvoted - the most likely time you will get a disk failure is during a rebuild of an array. I've had one fail during rebuild that was from the same batch as the already failed disk in a RAID 6, thank god it was RAID 6...

Nowadays, at least at my old job, we made sure to ask the vendor for the disks for the SANs to be randomised

[Aug 31, 2019] The Linux Programming Interface

Aug 31, 2019 | books.slashdot.org

73 "Michael Kerrisk has been the maintainer of the Linux Man Pages collection (man 7) for more than five years now, and it is safe to say that he has contributed to the Linux documentation available in the online manual more than any other author before. For this reason he has been the recipient a few years back of a Linux Foundation fellowship meant to allow him to devote his full time to the furthering this endeavor. His book is entirely focused on the system interface and environment Linux (and, to some extent, any *NIX system) provides to a programmer. My most obvious choice for a comparison of the same caliber is Michael K. Johnson and Eric W. Troan's venerable Linux Application Development , the second edition of which was released in 2004 and is somewhat in need of a refresh, lamentably because it is an awesome book that belongs on any programmer's shelf. While Johnson and Troan have introduced a whole lot of programmers to the pleasure of coding to Linux's APIs, their approach is that of a nicely flowing tutorial, not necessarily complete, but unusually captivating and very suitable to academic use. Michael's book is a different kind of beast: while the older tome selects exquisite material, it is nowhere as complete as his -- everything relating to the subject that I could reasonably think of is in the book, in a very thorough and maniacally complete yet enjoyably readable way -- I did find one humorous exception, more on that later. Keep reading for the rest of Federico's review.

The Linux Programming Interface
Author: Michael Kerrisk
Pages: 1552
Publisher: No Starch Press
Rating: 8/10
Reviewer: Federico Lucifredi
ISBN: 9781593272203
Summary: The definitive guide to the Linux and UNIX programming interface
This book is an unusual, if not altogether unique, entry into the Linux programming library: for one, it is a work of encyclopedic breadth and depth, spanning in great detail concepts usually spread in a multitude of medium-sized books, but by this yardstick the book is actually rather concise, as it is neatly segmented in 64 nearly self-contained chapters that work very nicely as short, deep-dive technical guides. I have collected an extremely complete technical library over the years, and pretty much any book of significance that came out of the Linux and Bell Labs communities is in it -- it is about 4 shelves, and it is far from portable. It is very nice to be able to reach out and pick the definitive work on IPC, POSIX threads, or one of several socket programming guides -- not least because having read them, I know what and where to pick from them. But for those out there who have not invested so much time, money, and sweat moving so many books around, Kerrisk's work is priceless: any subject be it timers, UNIX signals, memory allocation or the most classical of topics (file I/O) gets its deserved 15-30 page treatment, and you can pick just what you need, in any order.

Weighing in at 1552 pages, this book is second only to Charles Kozierok's mighty TCP/IP Guide in length in the No Starch Press catalog. Anyone who has heard me comment about books knows I usually look askance at anything beyond the 500-page mark, regarding it as something defective in structure that fails the "I have no time to read all that" test. In the case of Kerrisk's work, however, just as in the case of Kozierok's, actually, I am happy to waive my own rule, as these heavyweights in the publisher's catalog are really encyclopedias, and despite my bigger library I would like to keep this single tome within easy reach of my desk to avoid having to fetch the other tomes for quick lookups -- yes, I still have lazy programmer blood in my veins.

There is another perspective to this: while writing, I took a break and while wandering around I found myself in Miguel's office (don't tell him ;-), and there spotted a Bell Labs book lying on his shelf that (incredibly) I had never heard of. After a quick visit to AbeBooks to take care of this embarrassing matter, I am back here writing to use this incident as a valuable example: the classic system programming books, albeit timeless in their own way, show their rust when it comes to newer and more esoteric Linux system calls (mmap and inotify are fair examples) and even entire subsystems in some cases -- and that's another place where this book shines: it is not only very complete, it is really up to date, a combination I cannot think of a credible alternative to in today's available book offerings.

One more specialized but particularly unique property of this book is that it can be quite helpful in navigating what belongs to what standard, be it POSIX, X/Open, SUS, LSB, FHS, and what not. Perhaps it is not entirely complete in this, but it is more helpful than anything else I have seen released since Donald Lewine's ancient POSIX Programmers Guide (O'Reilly). Standards conformance is a painful topic, but one you inevitably stumble into when writing code meant to compile and run not only on Linux but to cross over to the BSDs or farther yet to other *NIX variants. If you have to deal with that kind of divine punishment, this book, together with the Glibc documentation, is a helpful palliative as it will let you know what is not available on other platforms, and sometimes even what alternatives you may have, for example, on the BSDs.

If you are considering the purchase, head over to Amazon and check out the table of contents; you will be impressed. The Linux Programming Encyclopedia would have been a perfectly adequate title for it in my opinion. In closing, I mentioned that after thinking for a good while I found one thing to be missing in this book: next to the appendixes on tracing, casting the null pointer, parsing command-line options, and building a kernel configuration, a tutorial on writing man pages was sorely and direly missing! Michael, what were you thinking?

Federico Lucifredi is the maintainer of man (1) and a Product Manager for the SUSE Linux Enterprise and openSUSE distributions.

You can purchase The Linux Programming Interface from amazon.com . Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines , then visit the submission page .

[Dec 15, 2010] "Beej" UNIX Inter Process Communication (IPC) tutorial By Brian Hall

Explains the different aspects of traditional UNIX Inter Process Communication (IPC). Brian Hall provides a lot of C code that you can compile and test yourself for a better understanding of these concepts.

[Mar 21, 2007] Kernel command using Linux system calls

21 Mar 2007 (IBM developerWorks) Linux system calls -- we use them every day. But do you know how a system call is performed from user-space to the kernel? Explore the Linux system call interface (SCI), learn how to add new system calls (and alternatives for doing so), and discover utilities related to the SCI.

A system call is an interface between a user-space application and a service that the kernel provides. Because the service is provided in the kernel, a direct call cannot be performed; instead, you must use a process of crossing the user-space/kernel boundary. The way you do this differs based on the particular architecture. For this reason, I'll stick to the most common architecture, i386.

In this article, I explore the Linux SCI, demonstrate adding a system call to the 2.6.20 kernel, and then use this function from user-space. I also investigate some of the functions that you'll find useful for system call development and alternatives to system calls. Finally, I look at some of the ancillary mechanisms related to system calls, such as tracing their usage from a given process.

The SCI

The implementation of system calls in Linux is varied based on the architecture, but it can also differ within a given architecture. For example, older x86 processors used an interrupt mechanism to migrate from user-space to kernel-space, but new IA-32 processors provide instructions that optimize this transition (using sysenter and sysexit instructions). Because so many options exist and the end-result is so complicated, I'll stick to a surface-level discussion of the interface details. See the Resources at the end of this article for the gory details.

You needn't fully understand the internals of the SCI to amend it, so I explore a simple version of the system call process (see Figure 1). Each system call is multiplexed into the kernel through a single entry point. The eax register is used to identify the particular system call that should be invoked, which is specified in the C library (per the call from the user-space application). When the C library has loaded the system call index and any arguments, a software interrupt is invoked (interrupt 0x80), which results in execution (through the interrupt handler) of the system_call function. This function handles all system calls, as identified by the contents of eax. After a few simple tests, the actual system call is invoked using the system_call_table and index contained in eax. Upon return from the system call, syscall_exit is eventually reached, and a call to resume_userspace transitions back to user-space. Execution resumes in the C library, which then returns to the user application.
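As a concrete, minimal illustration (not taken from the article itself): on Linux you can bypass the usual C library stub and go through the generic syscall() wrapper, which loads the call number and arguments and performs the user-space/kernel transition described above.

    /* Invoke a system call by its index via the generic syscall() wrapper. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        /* SYS_getpid is the index the kernel uses to select the handler
           from its system call table. */
        long pid = syscall(SYS_getpid);
        printf("getpid() via syscall(): %ld\n", pid);
        return 0;
    }

Compile it with a plain cc and run it; the result should match what the ordinary getpid() library wrapper returns.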

[Jan 3, 2005] Has UNIX Programming Changed in 20 Years By Marc Rochkind.

If all the basics are the same, what has changed? Well, these things:

More System Calls

The number of system calls has quadrupled, more or less, depending on what you mean by "system call." The first edition of Advanced UNIX Programming focused on only about 70 genuine kernel system calls -- for example, open, read, and write; but not library calls like fopen, fread, and fwrite. The second edition includes about 300. (There are about 1,100 standard function calls in all, but many of those are part of the Standard C Library or are obviously not kernel facilities.) Today's UNIX has threads, real-time signals, asynchronous I/O, and new interprocess-communication features (POSIX IPC), none of which existed 20 years ago. This has caused, or been caused by, the evolution of UNIX from an educational and research system to a universal operating system. It shows up in embedded systems (parking meters, digital video recorders); inside Macintoshes; on a few million web servers; and is even becoming a desktop system for the masses. All of these uses were unanticipated in 1984.
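To make the distinction concrete, here is a minimal sketch (my example, not Rochkind's; /etc/passwd is used only as a conveniently readable file) contrasting genuine kernel system calls with their Standard C Library counterparts:

    /* open/read/write are kernel system calls; fopen/fread are buffered
       library calls built on top of them. */
    #include <stdio.h>     /* fopen, fread, printf - library calls */
    #include <fcntl.h>     /* open                 - system call   */
    #include <unistd.h>    /* read, close          - system calls  */

    int main(void)
    {
        char buf[128];

        int fd = open("/etc/passwd", O_RDONLY);        /* straight into the kernel */
        if (fd != -1) {
            ssize_t n = read(fd, buf, sizeof buf);     /* unbuffered read */
            printf("read() returned %zd bytes\n", n);
            close(fd);
        }

        FILE *fp = fopen("/etc/passwd", "r");          /* wraps open() internally */
        if (fp != NULL) {
            size_t m = fread(buf, 1, sizeof buf, fp);  /* buffered read via stdio */
            printf("fread() returned %zu bytes\n", m);
            fclose(fp);
        }
        return 0;
    }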

More Languages

In 1984, UNIX applications were usually programmed in C, occasionally mixed with shell scripts, Awk, and Fortran. C++ was just emerging; it was implemented as a front end to the C compiler. Today, C is no longer the principal UNIX application language, although it's still important for low-level programming and as a reference language. (All the examples in both books are written in C.) C++ is efficient enough to have replaced C when the application requirements justify the extra effort, but many projects use Java instead, and I've never met a programmer who didn't prefer it over C++. Computers are fast enough so that interpretive scripting languages have become important, too, led by Perl and Python. Then there are the web languages: HTML, JavaScript, and the various XML languages, such as XSLT.

Even if you're working in one of these modern languages, though, you still need to know what's going on "down below," because UNIX still defines -- and, to a degree, limits -- what the higher-level languages can do. This is a challenge for many students who want to learn UNIX, but don't want to learn C. And for their teachers, who tire of debugging memory problems and explaining the distinction between declarations and definitions.

TIP

To enable students to learn UNIX without first learning C, I developed a Java-to-UNIX system-call interface that I call Jtux. It allows almost all of the UNIX system calls to be executed from Java, using the same arguments and datatypes as the official C calls. You can find out more about Jtux and download its source code from http://basepath.com/aup/.

More Subsystems

The third area of change is that UNIX is both more visible than ever (sold by Wal-Mart!) and more hidden, underneath subsystems like J2EE and web servers, Apache, Oracle, and desktops such as KDE or GNOME. Many application programmers are programming for these subsystems, rather than for UNIX directly. What's more, the subsystems themselves are usually insulated from UNIX by a thin portability layer that has different implementations for different operating systems. Thus, many UNIX system programmers these days are working on middleware, rather than on the end-user applications that are several layers higher up.

More Portability

The fourth change is the requirement for portability between UNIX systems, including Linux and the BSD-derivatives, one of which is the Macintosh OS X kernel (Darwin). Portability was of some interest in 1984, but today it's essential. No developer wants to be locked into a commercial version of UNIX without the possibility of moving to Linux or BSD, and no Linux developer wants to be locked into only one distribution. Platforms like Java help a lot, but only serious attention to the kernel APIs, along with careful testing, will ensure that the code is really portable. Indeed, you almost never hear a developer say that he or she is writing for XYZ's UNIX. It's much more common to hear "UNIX and Linux," implying that the vendor choice will be made later. (The three biggest proprietary UNIX hardware companies -- Sun, HP, and IBM -- are all strong supporters of Linux.)

More Complete Standards

The requirement for portability is connected with the fifth area of change, the role of standards. In 1984, a UNIX standards effort was just starting. The IEEE's POSIX group hadn't yet been formed. Its first standard, which emerged in 1988, was a tremendous effort of exceptional quality and rigor, but it was of very little use to real-world developers because it left out too many APIs, such as those for interprocess communication and networking. That minimalist approach to standards changed dramatically when The Open Group was formed from the merger of X/Open and the Open Software Foundation in 1996. Its objective was to include all the APIs that the important applications were using, and to specify them as well as time allowed -- which meant less precisely than POSIX did. They even named one of their standards Spec 1170, the number being the total of 926 APIs, 70 headers, and 174 commands. Quantity over quality, maybe, but the result meant that for the first time programmers would find in the standard the APIs they really needed. Today, The Open Group's Single UNIX Specification is the best guide for UNIX programmers who need to write portably.

[Nov 29, 2004] The Canberra University views of Processes and Process Management

[Aug 20, 2004] Manipulating Files And Directories In Unix Copyright (c) 1998-2002 by guy keren.

The following tutorial describes various common methods for reading and writing files and directories on a Unix system. Part of the information is common C knowledge, and is repeated here for completeness. Other information is Unix-specific, although DOS programmers will find some of it similar to what they saw in various DOS compilers. If you are a proficient C programmer, and know everything about the standard I/O functions, its buffering operations, and know functions such as fseek() or fread(), you may skip the standard C library I/O functions section. If in doubt, at least skim through this section, to catch up on things you might not be familiar with, and at least look at the standard C library examples.
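As a small taste of what such file and directory manipulation looks like (an illustrative sketch, not code taken from the tutorial itself), here is one common way to read a directory on a Unix system:

    /* List the entries of the current directory with opendir()/readdir(). */
    #include <stdio.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *dir = opendir(".");
        struct dirent *entry;

        if (dir == NULL) {
            perror("opendir");
            return 1;
        }
        while ((entry = readdir(dir)) != NULL)   /* "." and ".." are included */
            printf("%s\n", entry->d_name);
        closedir(dir);
        return 0;
    }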

  • This document is copyright (c) 1998-2002 by guy keren.

    The material in this document is provided AS IS, without any expressed or implied warranty, or claim of fitness for a particular purpose. Neither the author nor any contributors shall be liable for any damages incurred directly or indirectly by using the material contained in this document.

    permission to copy this document (electronically or on paper, for personal or organization internal use) or publish it on-line is hereby granted, provided that the document is copied as-is, this copyright notice is preserved, and a link to the original document is written in the document's body, or in the page linking to the copy of this document.

    Permission to make translations of this document is also granted, under these terms - assuming the translation preserves the meaning of the text, the copyright notice is preserved as-is, and a link to the original document is written in the document's body, or in the page linking to the copy of this document.

    For any questions about the document and its license, please contact the author.

  • UNIX System Calls

    [Apr 17, 2003] Exploring processes with Truss: Part 1 By Sandra Henry-Stocker

    The ps command can tell you quite a few things about each process running on your system. These include the process owner, memory use, accumulated time, the process status (e.g., waiting on resources) and many other things as well. But one thing that ps cannot tell you is what a process is doing - what files it is using, what ports it has opened, what libraries it is using and what system calls it is making. If you can't look at source code to determine how a program works, you can tell a lot about it by using a procedure called "tracing". When you trace a process (e.g., truss date), you get verbose commentary on the process' actions. For example, you will see a line like this each time the program opens a file:

    open("/usr/lib/libc.so.1", O_RDONLY) = 4

    The text on the left side of the equals sign clearly indicates what is happening. The program is trying to open the file /usr/lib/libc.so.1 and it's trying to open it in read-only mode (as you would expect, given that this is a system library). The right side is not nearly as self-evident. We have just the number 4. Open is not a Unix command, of course, but a system call. That means that you can only use it from within a program. Due to the nature of Unix, however, system calls are documented in man pages just like ls and pwd.

    To determine what this number represents, you can skip down in this column or you can read the man page. If you elect to read the man page, you will undoubtedly read a line that tells you that the open() function returns a file descriptor for the named file. In other words, the number, 4 in our example, is the number of the file descriptor referred to in this open call. If the process that you are tracing opens a number of files, you will see a sequence of open calls. With other activity removed, the list might look something like this:

    open("/dev/zero", O_RDONLY) = 3

    open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT

    open("/usr/lib/libc.so.1", O_RDONLY) = 4

    open("/usr/lib/libdl.so.1", O_RDONLY) = 4

    open64("./../", O_RDONLY|O_NDELAY) = 3

    open64("./../../", O_RDONLY|O_NDELAY) = 3

    open("/etc/mnttab", O_RDONLY) = 4

    Notice that the first file handle is 3 and that file handles 3 and 4 are used repeatedly. The initial file handle is always 3. This indicates that it is the first file handle following those that are the same for every process that you will run - 0, 1 and 2. These represent standard in, standard out and standard error.

    The file handles shown in the example truss output above are repeated only because the associated files are subsequently closed. When a file is closed, the file handle that was used to access it can be used again.

    The close commands include only the file handle, since the location of the file is known. A close command would, therefore, be something like close(3). One of the lines shown above displays a different response - Err#2 ENOENT. This "error" (the word is put in quotes because this does not necessarily indicate that the process is defective in any way) indicates that the file the open call is attempting to open does not exist. Read "ENOENT" as "No such file".

    Some open calls place multiple restrictions on the way that a file is opened. The open64 calls in the example output above, for example, specify both O_RDONLY and O_NDELAY. Again, reading the man page will help you to understand what each of these specifications means and will present you with a list of other options as well.
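    The following minimal C sketch (mine, not part of the original column) reproduces what the traced calls above show: a successful open() returns the lowest free descriptor - typically 3, right after 0, 1 and 2 - and opening a file that does not exist fails with errno set to ENOENT. The /var/ld/ld.config path comes from the trace; /etc/passwd is simply a conveniently readable file.

        /* Demonstrate descriptor numbering and the ENOENT error from open(). */
        #include <stdio.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/etc/passwd", O_RDONLY);
            printf("open(\"/etc/passwd\") returned %d\n", fd);   /* usually 3 */

            int missing = open("/var/ld/ld.config", O_RDONLY);   /* path from the trace */
            if (missing == -1 && errno == ENOENT)
                printf("ENOENT: %s\n", strerror(errno));         /* reads as "No such file" */

            if (fd != -1)
                close(fd);   /* descriptor 3 becomes reusable again */
            return 0;
        }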

    As you might expect, open is only one of many system calls that you will see when you run the truss command. Next week we will look at some additional system calls and determine what they are doing.

    Exploring processes with Truss: part 2 By Sandra Henry-Stocker

    While truss and its cousins on non-Solaris systems (e.g., strace on Linux and ktrace on many BSD systems) provide a lot of data on what a running process is doing, this information is only useful if you know what it means. Last week, we looked at the open call and the file handles that are returned by the call to open(). This week, we look at some other system calls and analyze what these system calls are doing. You've probably noticed that the nomenclature for system functions is to follow the name of the call with a set of empty parentheses, for example, open(). You will see this nomenclature in use whenever system calls are discussed.

    The fstat() and fstat64() calls obtain information about open files - "fstat" refers to "file status". As you might expect, this information is retrieved from the files' inodes, including whether or not you are allowed to read the files' contents. If you trace the ls command (i.e., truss ls), for example, your trace will start with lines that resemble these:

    1 execve("/usr/bin/ls", 0x08047BCC, 0x08047BD4) argc = 1

    2 open("/dev/zero", O_RDONLY) = 3

    3 mmap(0x00000000, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0xDFBFA000

    4 xstat(2, "/usr/bin/ls", 0x08047934) = 0

    5 open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT

    6 sysconfig(_CONFIG_PAGESIZE) = 4096

    7 open("/usr/lib/libc.so.1", O_RDONLY) = 4

    8 fxstat(2, 4, 0x08047310) = 0

    ...

    28 lstat64(".", 0x080478B4) = 0

    29 open64(".", O_RDONLY|O_NDELAY) = 3

    30 fcntl(3, F_SETFD, 0x00000001) = 0

    31 fstat64(3, 0x0804787C) = 0

    32 brk(0x08057208) = 0

    33 brk(0x08059208) = 0

    34 getdents64(3, 0x08056F40, 1048) = 424

    35 getdents64(3, 0x08056F40, 1048) = 0

    36 close(3) = 0

    In line 31, we see a call to fstat64, but what file is it checking? The man page for fstat() and your intuition are probably both telling you that this fstat call is obtaining information on the file opened two lines before - "." or the current directory - and that it is referring to this file by its file handle (3) returned by the open64() call in line 29. Keep in mind that a directory is simply a file, though a different variety of file, so the same system calls are used as would be used to check a text file.
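    A compact sketch (not from the column) of the same idea in C: fstat() takes a descriptor rather than a path, and it works just as well when that descriptor refers to a directory such as ".".

        /* Call fstat() on an open directory descriptor and print inode data. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open(".", O_RDONLY);    /* a directory is just another file */
            struct stat st;

            if (fd == -1)
                return 1;
            if (fstat(fd, &st) == 0)         /* the information comes from the inode */
                printf("inode %lu, mode %o, size %lld bytes\n",
                       (unsigned long) st.st_ino,
                       (unsigned int) (st.st_mode & 07777),
                       (long long) st.st_size);
            close(fd);
            return 0;
        }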

    You will probably also notice that the file being opened is called /dev/zero (again, see line 2). Most Unix sysadmins will immediately know that /dev/zero is a special kind of file - primarily because it is stored in /dev. And, if moved to look more closely at the file, they will confirm that the file that /dev/zero points to (it is itself a symbolic link) is a special character file. What /dev/zero provides to system programmers, and to sysadmins if they care to use it, is an endless stream of zeroes. This is more useful than might first appear.

    To see how /dev/zero works, you can create a 10M-byte file full of zeroes with a command like this:

    /bin/dd < /dev/zero > zerofile bs=1024 seek=10240 count=1

    This command works well because it creates the needed file with only a few read and write operations; in other words, it is very efficient.

    You can verify that the file is zero-filled with od.

    # od -x zerofile

    0000000 0000 0000 0000 0000 0000 0000 0000 0000

    *

    50002000

    Each string of four zeros (0000) represents two bytes of data. The * on the second line of output indicates that all of the remaining lines are identical to the first. The final offset, 50002000 octal (10,486,784 decimal), matches what the dd command built: 10,240 skipped blocks of 1,024 bytes plus the single 1,024-byte block that was actually written.

    Looking back at the truss output above, we cannot help but notice that the first line of the truss output includes the name of the command that we are tracing. The execve() system call executes a process. The first argument to execve() is the name of the file from which the new process image is to be loaded. The mmap() call which follows maps the process image into memory. In other words, it directly incorporates file data into the process address space. The getdents64() calls on lines 34 and 35 are extracting information from the directory file - "dents" refers to "directory entries".
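    To see what "directly incorporates file data into the process address space" means in practice, here is a minimal sketch (again mine, not the column's; /etc/hosts is just an example of a readable, non-empty file) that maps an ordinary file with mmap() and prints it without a single read() call:

        /* Map a file into memory and write its contents to stdout. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/etc/hosts", O_RDONLY);
            struct stat st;
            char *p;

            if (fd == -1 || fstat(fd, &st) == -1)
                return 1;
            p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED)
                return 1;
            fwrite(p, 1, st.st_size, stdout);        /* file bytes, via the mapping */
            munmap(p, st.st_size);
            close(fd);
            return 0;
        }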

    The sequence of steps that we see at the beginning of the truss output - executing the entered command, opening /dev/zero, mapping memory and so on - looks the same whether you are tracing ls, pwd, date or restarting Apache. In fact, the first dozen or so lines in your truss output will be nearly identical regardless of the command you are running. You should, however, expect to see some differences between different Unix systems and different versions of Solaris.

    Viewing the output of truss, you can get a solid sense of how the operating system works. The same insights are available if you are tracing your own applications or troubleshooting third party executables.

    -------------------

    Sandra Henry-Stocker

    Recommended Links

    See separate page Unix System Calls Links

