From damage to discovery via virtual unwrapping: Reading the scroll from En-Gedi


Science Advances  21 Sep 2016:
Vol. 2, no. 9, e1601247
DOI: 10.1126/sciadv.1601247

Abstract

Computer imaging techniques are commonly used to preserve and share readable manuscripts, but capturing writing locked away in ancient, deteriorated documents poses an entirely different challenge. We present a software pipeline, referred to as “virtual unwrapping,” that allows textual artifacts to be read completely and noninvasively. The systematic digital analysis of the extremely fragile En-Gedi scroll (the oldest Pentateuchal scroll in Hebrew outside of the Dead Sea Scrolls) reveals the writing hidden on its untouchable, disintegrating sheets. Our approach for recovering substantial ink-based text from a damaged object results in readable columns at such high quality that serious critical textual analysis can occur. Hence, this work creates a new pathway for subsequent textual discoveries buried within the confines of damaged materials.

Keywords

  • Micro-CT
  • virtual unwrapping
  • segmentation
  • digital restoration
  • visualization
  • digital flattening

INTRODUCTION

In 1970, archeologists made a dramatic discovery at En-Gedi, the site of a large, ancient Jewish community dating from the late eighth century BCE until its destruction by fire circa 600 CE. Excavations uncovered the synagogue’s Holy Ark, inside of which were multiple charred lumps of what appeared to be animal skin (parchment) scroll fragments (1, 2). The Israel Antiquities Authority (IAA) faithfully preserved the scroll fragments, although in the 40 years following the discovery, no one produced a means to overcome the irreversible damage they had suffered in situ. Each fragment’s main structure, completely burned and crushed, had turned into chunks of charcoal that continued to disintegrate every time they were touched. Without a viable restoration and conservation protocol, physical intervention was unthinkable. Like many badly damaged materials in archives around the world, the En-Gedi scroll was shelved, leaving its potentially valuable contents hidden and effectively locked away by its own damaged condition (Fig. 1).

Fig. 1. The charred scroll from En-Gedi.

Image courtesy of the Leon Levy Dead Sea Scrolls Digital Library, IAA. Photo: S. Halevi.

The implementation and application of our computational framework allows the identification and scholarly textual analysis of the ink-based writing within such unopened, profoundly damaged objects. Our systematic approach essentially unlocks the En-Gedi scroll and, for the first time, enables a total visual exploration of its interior layers, leading directly to the discovery of its text. By virtually unwrapping the scroll, we have revealed it to be the earliest copy of a Pentateuchal book ever found in a Holy Ark. Furthermore, this work establishes a restoration pathway for damaged textual material by showing that text extraction is possible while avoiding the need for injurious physical handling. The restored En-Gedi scroll represents a significant leap forward in the field of manuscript recovery, conservation, and analysis.

Virtual unwrapping

Our generalized computational framework for virtual unwrapping applies to a wide range of damaged, text-based materials. Virtual unwrapping is the composite result of segmentation, flattening, and texturing: a sequence of transformations beginning with the voxels of a three-dimensional (3D) unstructured volumetric scan of a damaged manuscript and ending with a set of 2D images that reveal the writing embedded in the scan (3–6). The required transformations are initially unknown and must be solved by choosing a model and applying a series of constraints about the known and observable structure of the object. Figure 2 shows the final result for the scroll from En-Gedi. This resultant image, which we term the “master view,” is a visualization of the entire surface extracted from within the En-Gedi scroll.

Fig. 2. Completed virtual unwrapping for the En-Gedi scroll.

The first stage, segmentation, is the identification of a geometric model of structures of interest within the scan volume. This process digitally recreates the “pages” that hold potential writing. We use a triangulated surface mesh for this model, which can readily support many operations that are algorithmically convenient: ray intersection, shape dynamics, texturing, and rendering. A surface mesh can vary in resolution as needed and forms a piecewise approximation of arbitrary surfaces on which there may be writing. The volumetric scan defines a world coordinate frame for the mesh model; thus, segmentation is the process of aligning a mesh with structures of interest within the volume.
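A minimal concrete form of such a triangulated surface mesh, together with one of the cheap per-face operations it affords, can be sketched as follows. The arrays and the helper function are our own toy illustration, not the paper's data structures:

```python
import numpy as np

# Illustrative triangulated surface mesh in the scan's voxel coordinate
# frame: vertices as an (N, 3) array, faces as integer triples indexing
# into it. (These values are invented for illustration.)
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 1.0, 0.5]])
faces = np.array([[0, 1, 2],
                  [1, 3, 2]])

def face_normals(verts, faces):
    """Unit normal of each triangle from the cross product of two edges,
    one of the algorithmically convenient per-face operations a mesh
    representation supports."""
    a = verts[faces[:, 0]]
    b = verts[faces[:, 1]]
    c = verts[faces[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

normals = face_normals(vertices, faces)
```

Because vertices live directly in the volume's coordinate frame, operations like ray intersection and texturing reduce to indexing or interpolating the scan at mesh positions.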

The second stage, texturing, is the formation of intensity values on the geometric model based on its position within the scan volume. This is where we see letters and words for the first time on the recreated page. The triangulated surface mesh offers a direct approach to the texturing problem that is similar to solid texturing (7, 8): Each point on the surface of the mesh is given an intensity value based on its location in the 3D volume. Many approaches exist for assigning intensities from the volume to the triangles of the segmented mesh, some of which help to overcome noise in the volumetric imaging and incorrect localization in segmentation.

The third stage, flattening, is necessary because the geometric model may be difficult to visualize as an image. Specifically, if text is being extracted, it will be challenging to read on a 3D surface shaped like the cylindrical wraps of scrolled material. This stage solves for a transformation that maps the geometric model (and associated intensities from the texturing step) to a plane, which is then directly viewable as a 2D image for the purpose of visualization.

In practice, this framework is best applied in a piecewise fashion to accurately localize a scroll’s highly irregular geometry. Also, the methodology required to map each of these steps from the original volume to flattened images involves a series of algorithmic decisions and approximations. Because textual identification is the primary goal of our virtual unwrapping framework, we tolerate mathematical and geometric error along the way to ensure that we extract the best possible images of text. Hence, the final merging and visualization step is significant not only for composing small sections into a single master view but also for checking the correctness and relative alignments of individual regions. Therefore, it is crucial to preserve the complete transformation pipeline that maps voxels in the scan volume to final pixels in the unwrapped master view so that any claim of extracted text can be independently verified.
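The three-stage sequence described above can be sketched in miniature. The trivial "brightest voxel per column" segmentation and the height-field flattening below are our own stand-ins, chosen only to make the composition of stages concrete:

```python
import numpy as np

def segment(volume):
    """Toy segmentation: localize one surface as (N, 3) voxel coordinates.
    Here we simply take the brightest voxel in each (x, y) column as a
    stand-in for real surface localization."""
    zs = np.argmax(volume, axis=2)
    xs, ys = np.meshgrid(np.arange(volume.shape[0]),
                         np.arange(volume.shape[1]), indexing="ij")
    return np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)

def texture(volume, points):
    """Assign each surface point the scan intensity at its voxel position."""
    return volume[points[:, 0], points[:, 1], points[:, 2]]

def flatten(points):
    """Toy flattening: this surface is trivially a height field, so its
    2D parameterization is just the (x, y) part of each point."""
    return points[:, :2]

volume = np.random.rand(4, 5, 6)
pts = segment(volume)
intensities = texture(volume, pts)
uv = flatten(pts)

# Rasterize the flattened, textured surface into a 2D "master view".
master_view = np.zeros(volume.shape[:2])
master_view[uv[:, 0], uv[:, 1]] = intensities
```

Real scroll surfaces are neither height fields nor planar, which is exactly why the segmentation, texturing, and flattening stages each require the more elaborate machinery described in the following sections.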

The volumetric scan

The unwrapping process begins by acquiring a noninvasive digitization that gives some representation of the internal structure and contents of an object in situ (9–11). There are a number of choices for noninvasive, penetrative, and volumetric scanning, and our framework places no limits on the modality of the scan. As enhancements in volumetric scanning methodology [for example, phase-contrast microtomography (6, 12)] occur, we can take advantage of the ensuing potential for improved images. Whatever the scanning method, it must be appropriate to the scale and to the material and physical properties of the object.

Because of the particularities of the En-Gedi scroll, we used x-ray–based micro–computed tomography (micro-CT). The En-Gedi scroll’s damage creates a scanning challenge: How does one determine the correct scan protocol before knowing how ink will appear or even if the sample contains ink at all? It is the scan and subsequent pipeline that reveal the writing. After several calibration scans, a protocol was selected that produced a visible range of intensity variation on the rolled material. The spatial resolution was adjusted with respect to the sample size to capture enough detail through the thickness of each material layer to reveal ink if present and detectable. The chemical composition of the ink within the En-Gedi scroll remains unknown because there are no exposed areas suitable for analysis. However, the ink response within the micro-CT scan is denser than other materials, implying that it likely contains metal, such as iron or lead.

Any analysis necessitates physical handling of the friable material, and so, even noninvasive methods must be approached with great care. Although low-power x-rays themselves pose no significant danger to inanimate materials, the required transport and handling of the scroll make physical conservation and preservation an ever-present concern. However, once acquired, the volumetric scan data become the basis for all further computations, and the physical object can be returned to the safety of its protective archive.

Segmentation

Segmentation, which is the construction of a geometric model localizing the shape and position of the substrate surface within the scan on which text is presumed to appear, is challenging for several reasons. First, the surface as presented in the scanned volume is not developable; that is, it is not isometric to a plane (13–15). Although an isometry could be useful as a constraint in some cases, the skin forming the layers of the En-Gedi scroll has not simply been folded or rolled. Damage to the scroll has deformed the shape of the skin material, which is apparent in the 3D scanned volume, making such a constraint unworkable. Second, the density response of animal skin in the volume is noisy and difficult to localize with methods such as marching cubes (16). Third, layers of the skin that are close together create ambiguities that are difficult to resolve from purely local, shape-based operators. Figure 3 shows four distinct instances where segmentation proves challenging because of the damage and unpredictable variation in the appearance of the surface material in the scan volume.

Fig. 3. Segmentation challenges in the En-Gedi scroll, based on examples in the slice view.

Double/split layering and challenging cell structure (left), ambiguous layers with unknown material (middle left), high-density “bubbling” on the secondary layer (middle right), and gap in the primary layer (right).

Our segmentation algorithm applied to the En-Gedi scroll builds a triangulated surface mesh that localizes a coherent section of the animal skin within a defined subvolume through a novel region-growing technique (Fig. 4). The basis for the algorithm is a local estimate of the differential geometry of the animal skin surface using a second-order symmetric tensor and associated feature saliency measures (17). An initial set of seed points propagates through the volume as a connected chain, directed by the local symmetric tensor and constrained by a set of spring forces. The movement of this particle chain through the volume traces out a surface over time. Figure 5 shows how crucially dependent the final result is on an accurate localization of the skin. When the segmented geometry drifts from the skin surface (Fig. 5A), the surface features disappear. When the skin is accurately localized (Fig. 5B), the surface detail, including cracks and ink evidence, becomes visible.

Fig. 4. A portion of the segmented surface and how it intersects the volume.

Fig. 5. The importance of accurate surface localization on the final generated texture.

(A) Texture generated when the surface is only partially localized. (B) Texture generated when the surface is accurately localized.

The user can tune the various parameters of this algorithm locally or globally based on the data set and at any time during the segmentation process. This allows for the continued propagation of the chain without the loss of previously segmented surface information. The segmentation algorithm terminates either at the user’s request or when a specified number of slices have been traversed by all of the particles in the chain.

The global structure of the entire surface is a piecewise composition of many smaller surface sections. Although it is certainly possible to generate a global structure through a single segmentation step, approaching the problem in a piecewise manner allows more accurate localization of individual sections, some of which are very challenging to extract. Although the segmented surface is not constrained to a planar isometry at the segmentation step, the model implicitly favors an approximation of an isometry. Furthermore, the model imposes a point-ordering constraint that prevents sharp discontinuities and self-intersections. The segmented surface, which has been regularized, smoothed, and resampled, becomes the basis for the texturing phase to visualize the surface with the intensities it inherits from its position in the volume.

Texturing

Once the layers of the scroll have been identified and modeled, the next step is to render readable textures on those layers. Texturing is the assignment of an intensity, or brightness, value derived from the volume to each point on a segmented surface. The interpretation of intensity values in the original volumetric scan is maintained through the texturing phase. In the case of micro-CT, intensities are related to density: Brighter values are regions of denser material, and darker values are less dense (18). A coating of ink made from iron gall, for example, would appear bright, indicating a higher density in micro-CT. Our texturing method is similar to the computer graphics approach of “solid texturing,” a procedure that evaluates a function defined over R³ for each point to be textured on the embedded surface (7, 8). In our case, the function over R³ is simply a lookup to reference the value (possibly interpolated) at that precise location in the volume scan.
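The lookup itself can be sketched as a trilinear interpolation of the scan volume. This is a common interpolation choice, offered here as an assumption; the paper does not specify which scheme its implementation uses:

```python
import numpy as np

def sample_volume(volume, p):
    """Trilinear interpolation of a scalar volume at a fractional 3D
    point: the 'function over R^3' of solid texturing, evaluated as a
    (possibly interpolated) lookup into the scan."""
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, np.array(volume.shape) - 1)
    f = p - i0  # fractional position inside the voxel cell
    # Blend the 8 surrounding voxel values by their trilinear weights.
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                val += w * volume[idx]
    return val

# On a volume whose intensity is linear in position, trilinear sampling
# reproduces the exact value at any fractional point.
vol = np.arange(27, dtype=float).reshape(3, 3, 3)   # vol[x, y, z] = 9x + 3y + z
v = sample_volume(vol, np.array([0.5, 0.5, 0.5]))
```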

In an ideal case, where both the scanned volume and localized surface mesh are perfect, a direct mapping of each surface point to its 3D volume position would generate the best possible texture. In practice, however, errors in surface segmentation combined with artifacts in the scan create the need for a filtering approach that can overcome these sources of noise. Therefore, we implement a neighborhood-based directional filtering method, which gives parametric control over the texturing. The texture intensity is calculated from a filter applied to the set of voxels within each surface point’s local neighborhood. The parameters (Fig. 6) include use of the point’s surface normal direction (directional or omnidirectional), the shape and extent of the local texturing neighborhood, and the type of filter applied to the neighborhood. The directional parameter is particularly important when attempting to recover text from dual-sided materials, such as books. In such cases, a single segmented surface can be used to generate both the recto and verso sides of the page. Figure 7 shows how this texturing method improves ink response in the resulting texture when the segmentation does not perfectly intersect the ink position on the substrate in the volumetric scan.

Fig. 6. The geometric parameters for directional texturing.

Fig. 7. The effect of directional texturing to improve ink response.

(Left) Intersection of the mesh with the volume. (Right) Directional texturing with a neighborhood radius of 7 voxels.
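A sketch of this neighborhood-based directional filtering follows, assuming a simple line of samples along the surface normal and a max filter to favor dense (bright) ink in micro-CT. The function signature and parameter names are our own, not the paper's API:

```python
import numpy as np

def directional_texture(volume, point, normal, radius=7, directional=True):
    """Filter the voxels within `radius` steps of `point` and return one
    intensity. With `directional=True`, the neighborhood runs along the
    surface normal, so ink slightly off the segmented surface is caught."""
    normal = normal / np.linalg.norm(normal)
    steps = np.arange(-radius, radius + 1)
    if directional:
        offsets = steps[:, None] * normal[None, :]
    else:
        # Omnidirectional fallback: a cube-shaped neighborhood.
        offsets = np.stack(np.meshgrid(steps, steps, steps),
                           axis=-1).reshape(-1, 3)
    coords = np.rint(point[None, :] + offsets).astype(int)
    coords = np.clip(coords, 0, np.array(volume.shape) - 1)
    samples = volume[coords[:, 0], coords[:, 1], coords[:, 2]]
    return samples.max()  # max filter favors the dense ink response

# A dense 'ink' voxel sits one layer off the segmented point; the
# directional neighborhood still recovers its full response.
vol = np.zeros((9, 9, 9))
vol[4, 4, 5] = 1.0                       # bright (dense) ink voxel
value = directional_texture(vol, np.array([4.0, 4.0, 4.0]),
                            np.array([0.0, 0.0, 1.0]), radius=3)
```

A directional neighborhood evaluated toward opposite sides of the surface is also what allows one segmented mesh to yield both recto and verso textures of a dual-sided page.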

Flattening

Region-growing in an unstructured volume generates surfaces that are nonplanar. In a scan of rolled-up material, most surface fragments contain high-curvature areas. These surfaces must be flattened to easily view the textures that have been applied to them. The process of flattening is the computation of a 3D to 2D parameterization for a given mesh (6, 19, 20). One straightforward assumption is that a localized surface cannot self-intersect and represents a coherent piece of substrate that was at one time approximately isometric to a plane. If the writing occurred on a planar surface before it was rolled up, and if the rolling itself induced no elastic deformations in the surface, then damage is the only thing that may have interrupted the isometric nature of the rolling.

We approach parameterization through a physics-based material model (4, 21, 22). In this model, the mesh is represented as a mass-spring system, where each vertex of the mesh is given a mass and the connections between vertices are treated as springs with associated stiffness coefficients. The mesh is relaxed to a plane through a balanced selection of appropriate forces and parameters. This process mimics the material properties of isometric deformation, which is analogous to the physical act of unwrapping.
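A minimal mass-spring relaxation in this spirit can be reduced to a single chain of vertices for clarity. The 1D simplification, the helix test case, and the coefficients below are ours, not the authors' parameters; the point is only that springs restoring each edge's original 3D length mimic an isometric unwrapping:

```python
import numpy as np

# Rest lengths come from a strip rolled into a 3D helix; springs then
# reproduce those lengths in the plane.
theta = np.linspace(0, 2 * np.pi, 40)
strip3d = np.stack([np.cos(theta), np.sin(theta), 0.2 * theta], axis=1)
rest = np.linalg.norm(np.diff(strip3d, axis=0), axis=1)  # 3D edge lengths

# Initial planar guess with deliberately wrong (uniform) spacing.
pos = np.stack([np.linspace(0.0, 5.0, len(theta)),
                np.zeros(len(theta))], axis=1)

step, k = 0.1, 1.0
for _ in range(10000):
    edges = np.diff(pos, axis=0)
    lengths = np.linalg.norm(edges, axis=1)
    # Hooke's law: each spring pulls/pushes its two endpoints toward
    # its original 3D rest length.
    f = k * (lengths - rest)[:, None] * edges / lengths[:, None]
    forces = np.zeros_like(pos)
    forces[:-1] += f
    forces[1:] -= f
    pos += step * forces  # damped, position-based relaxation

flat_lengths = np.linalg.norm(np.diff(pos, axis=0), axis=1)
```

After relaxation, the planar edge lengths match the 3D rest lengths, which is the discrete analog of flattening without stretch. Per-vertex masses and per-spring stiffnesses, as the text notes, extend this to model varying material elasticity.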

A major advantage of a simulation-based approach is the wide range of configurations that are possible under the framework. Parameters and forces can be applied per vertex or per spring. This precise control allows for modeling of not only the geometric properties of a surface but also the physical properties of that surface. For example, materials with higher physical elasticity can be represented as such within the same simulation.

Although this work relies on computing parameterizations solely through this simulation-based method, a hybrid approach that begins with existing parameterization methods [for example, least-squares conformal mapping (LSCM) (23) and angle-based flattening (ABF) (24)] followed by a physics-based model is also workable. The purely geometric approaches of LSCM and ABF produce excellent parameterizations but have no natural way to capture additional constraints arising from the mesh as a physical object. By tracking the physical state of the mesh during parameterization via LSCM or ABF, a secondary correction step using the simulation method could then be applied to account for the mesh’s physical properties.

Merging and visualization

The piecewise nature of the pipeline requires a final merge step. There can be many individually segmented mesh sections that must be reconciled into a composite master view. The shape, location in the volume, and textured appearance of the sections aid in the merge. We take two approaches to the merging step: texture and mesh merging.

Texture merging is the alignment of texture images from small segmentations to generate a composite master view. This process provides valuable user feedback when performed simultaneously with the segmentation process. Texture merging builds a master view that gives quick feedback on the overall progress and quality of segmentation. However, because each section of geometry is flattened independently, the merge produces distortions that are acceptable as an efficiently computed draft view, but must be improved to become a definitive result for the scholarly study of text.

Mesh merging refers to a more precise recalculation of the merge step by using the piecewise meshes to generate a final, high-quality master view. After all segmentation work is complete, individual mesh segmentations are merged in 3D to create a single surface that represents the complete geometry of the segmented scroll. The mesh from this new merged segmentation is then flattened and textured to produce a final master view image. Because mesh merging is computationally expensive compared to texture merging, it is not ideal for the progressive feedback required during segmentation of a scan volume. However, as the performance of algorithms improves and larger segmented surfaces become practical, it is likely that mesh merging will become viable as a user cue during the segmentation process.

Maintaining a provenance chain is an important component of our pipeline. The full set of transformations used to generate a final master view image can be referenced so that every pixel in a final virtually unwrapped master view can be mapped back to the voxel or set of voxels within the volume that contributed to its intensity value. This is important for both the quantitative analysis of the resulting image and the verification of any extracted text. Figure 8 demonstrates the ability to select a region and point of interest in the texture image and invert the transformation chain to recover original 3D positions within the volume.

Fig. 8. Demonstration of stored provenance chain.

The generated geometric transformations can map a region and point of interest in the master view (left) back to their 3D positions within the volume (right).
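One way to realize such a provenance record is to store, for every master-view pixel, the voxel that produced its intensity. This is our simplified single-voxel version; the actual pipeline maps each pixel back through the mesh, texturing, and flattening transforms:

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((8, 8, 8))   # stand-in for the reconstructed scan

# While texturing, record the source voxel of every master-view pixel.
provenance = {}                  # (row, col) in master view -> (x, y, z)
master = np.zeros((8, 8))
for r in range(8):
    for c in range(8):
        voxel = (r, c, 3)        # stand-in for the real surface mapping
        master[r, c] = volume[voxel]
        provenance[(r, c)] = voxel

# Inverting the chain: trace a pixel of interest back into the volume,
# so a claimed letterform can be checked against the raw scan data.
src = provenance[(2, 5)]
```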

RESULTS

Using this pipeline, we have restored and revealed the text on five complete wraps of the animal skin of the En-Gedi scroll, an object that likely will never be physically opened for inspection (Fig. 1). The resulting master image (Fig. 2) enables a complete textual critique, and although such analysis is beyond the scope of this paper, the consensus of our interdisciplinary team is that the virtually unwrapped result equals the best photographic images available in the 21st century. From the master view, one can clearly see the remains of two distinct columns of Hebrew writing that contain legible and countable lines, letters, and words (Fig. 9).

Fig. 9. Partial transcription and translation of recovered text.

(Column 1) Lines 5 to 7 from the En-Gedi scroll.

These images reveal the En-Gedi scroll to be the book of Leviticus, which makes it the earliest copy of a Pentateuchal book ever found in a Holy Ark and a significant discovery in biblical archeology (Fig. 10). Without our computational pipeline and the textual analysis it enables, the En-Gedi text would be totally lost for scholarship, and its value would be left unknown.

Fig. 10. Timeline placing the En-Gedi scroll within the context of other biblical discoveries.

What is clearly preserved in the master image is part of one sheet of a scripture scroll that contains 35 lines, of which 18 have been preserved and another 17 have been reconstructed. The lines contain 33 to 34 letters and spaces between letters; spaces between the words are indicated but are sometimes minimal. The two columns extracted from the scroll also exhibit an intercolumnar blank space, as well as a large blank space before the first column that is larger than the column of text. This large blank space leaves no doubt that what is preserved is the beginning of a scroll, in this case a Pentateuchal text: the book of Leviticus.

Armed with the extraction of this readable text and its historical context discerned from carbon dating and other related archeological evidence (1, 2), scholars can accurately place the En-Gedi writings in the canonical timeline of biblical text. Radiocarbon results date the scroll to the third or fourth century CE (table S1). Alternatively, a first or second century CE date was suggested on the basis of paleographical considerations by Yardeni (25). Dating the En-Gedi scroll to the third or fourth century CE falls near the end of the period of the biblical Dead Sea Scrolls (third century BCE to second century CE) and several centuries before the medieval biblical fragments found in the Cairo Genizah, which date from the ninth century CE onward (Fig. 10). Hence, the En-Gedi scroll provides an important extension to the evidence of the Dead Sea Scrolls and offers a glimpse into the earliest stages of almost 800 years of near silence in the history of the biblical text.

As may be expected from our knowledge of the development of the Hebrew text, the En-Gedi Hebrew text is not vocalized, there are no indications of verses, and the script resembles other documents from the late Dead Sea Scrolls. The text deciphered thus far is completely identical to the consonantal framework of the medieval text of the Hebrew Bible, traditionally named the Masoretic Text, which is the text presented in most printed editions of the Hebrew Bible. On the other hand, one to two centuries earlier, the so-called proto-Masoretic text, as reflected in the Judean Desert texts from the first centuries of the Common Era, still witnesses some textual fluidity. In addition, the En-Gedi scan revealed columns similar in length to those evidenced among the Dead Sea Scrolls.

DISCUSSION

Besides illuminating the history of the biblical text, our work on the scroll advances the development of textual imaging. Although previous research has successfully identified text within ancient artifacts, the En-Gedi manuscript represents the first severely damaged, ink-based scroll to be unrolled and identified noninvasively. The recent work of Barfod et al. (26) produced text from within a damaged amulet; however, the text was etched into the amulet’s thin metal surface, which served as a morphological base for the contrast of text. Although challenging, morphological structures provide an additional guide for segmentation that is unlikely to be present with ink-based writing. In the case of the En-Gedi scroll, for instance, the ink sits on the substrate and does not create an additional morphology that can aid the segmentation and rendering process. The amulet work also performed segmentation by constraining the surface to be ruled, and thus developable, to simplify the flattening problem. In addition, segmented strips were assembled showing letterforms, but a complete and merged surface segmentation was not computed, a result of using commercial software rather than implementing a custom software framework.

Samko et al. (27) describe a fully automated approach to segmentation and text extraction of undamaged, scrolled materials. Their results, from known legible manuscripts that can be physically unrolled for visual verification, serve as important test cases to validate their automated segmentation approach. However, the profound damage exhibited in materials such as the scroll from En-Gedi creates new challenges for segmentation, texturing, and flattening that only our framework directly addresses.

The work of Mocella et al. (12) claims that phase-contrast tomography generates contrast at ink boundaries in scans of material from Herculaneum. The hope for phase contrast comes from a progression of volumetric imaging methods (5, 6) and serves as a possible solution to the first step in our pipeline: creating a noninvasive, volumetric scan with some method that shows contrast at ink. Although verifying that ink sits on a page is not enough to allow scholarly study of discovered text, this is an important prelude to subsequent virtual unwrapping. Our complete approach makes such discovery possible.

An overarching concern as this framework becomes widely useful has to do not with technical improvements of the components, which will naturally occur as scientists and engineers innovate over the space of scanning, segmentation, and unwrapping, but rather with the certified provenance of every final texture claim that is produced from a scan volume. An analysis framework must offer the ability for independent researchers to confidently affirm results and verify scholarly claims. Letters, words, and, ultimately, whole texts that are extracted noninvasively from the inaccessible inner windings of an artifact must be subject to independent scrutiny if they are to serve as the basis for the highest level of scholarship. For such scrutiny to occur, accurate and recorded geometry, aligned in the coordinate frame of the original volumetric scan, must be retained for confirmation.

The computational framework we have described provides this ability for a third party to confirm that letterforms in a final output texture are the result of a pipeline of transformations on the original data and not solely an interpretation from an expert who may believe letterforms are visible. Such innate third-party confirmation capability reduces confusion around the resulting textual analyses and engenders confidence in the effectiveness of virtual unwrapping techniques.

The traditional approach of removing a folio from its binding—or unrolling a scroll—and pressing it flat to capture an accurate facsimile obviously will not work on fragile manuscripts that have been burned and crushed into lumps of disintegrating charcoal. The virtual unwrapping that we performed on the En-Gedi scroll proves the effectiveness of our software pipeline as a noninvasive alternative: a technological engine for text discovery in the face of profound damage. The implemented software components, which are necessary stages in the overall process, will continue to improve with every new object and discovery. However, the separable stages themselves, from volumetric scanning to the unwrapping and merging transformations, will become the guiding framework for practitioners seeking to open damaged textual materials. The geometric data passing through the individual stages are amenable to a standard interface through which the software components can interchangeably communicate. Hence, each component is a separable piece that can be individually upgraded as alternative and improved methods are developed. For example, the accurate segmentation of layers within a volume is a crucial part of the pipeline. Segmentation algorithms can be improved by tuning them to the material type (for example, animal skin, papyrus, and bark), the expected layer shape (for example, flat and rolled pages), and the nature of damage (for example, carbonized, burned, and fragmented). The flattening step is another place where improvements will better support user interaction, methods to quantify and visualize errors from flattening, and a comparative analysis between different mapping schemes.

The successful application of our virtual unwrapping pipeline to the En-Gedi scroll represents a confluence of physics, computer science, and the humanities. The technical underpinnings and implemented tools offer a collaborative opportunity between scientists, engineers, and textual scholars, who must remain crucially involved in the process of refining the quality of extracted text. Although more automation in the pipeline is possible, we have now achieved our overarching goal, which is the creation of a new restoration pathway—a way to overcome damage—to reach and retrieve text from the brink of oblivion.

MATERIALS AND METHODS

Master view results

The master view image of the En-Gedi scroll was generated using the specific algorithms and processes outlined below. Because we use the volumetric scan as the coordinate frame for all transformations in our pipeline, the resolution of the master view approximately matches that of the scan. The spatial resolution of the volume (18 μm/voxel, isometric) produces an image resolution of approximately 1410 pixels per inch (25,400 μm/inch), which can be considered archival quality. From this, we estimate the surface area of the unwrapped portion to be approximately 94 cm2 (14.57 in2). The average size of letterforms varies between 1.5 and 2 mm, and the pixels of the master view maintain the original dynamic range of 16 bits.
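The reported figures follow directly from the scan parameters; a quick arithmetic check (constants as given in the text):

```python
# Master-view resolution and area from the scan parameters.
MICRONS_PER_INCH = 25_400
voxel_um = 18                        # isometric voxel size of the scan

ppi = MICRONS_PER_INCH / voxel_um    # pixels per inch of the master view
                                     # (~1411, reported as approximately 1410)

area_cm2 = 94                        # reported unwrapped surface area
area_in2 = area_cm2 / 2.54 ** 2      # 1 inch = 2.54 cm; ~14.57 square inches
```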

Volumetric scan

Two volumetric scans were performed using a Bruker SkyScan 1176 in vivo micro-CT machine. It uses a PANalytical microfocus tube and a Princeton Instruments camera. With a maximum spatial resolution of 9 μm per voxel, it comfortably exceeded the resolution requirements for the En-Gedi scroll. A spatial resolution of 18 μm was used for all En-Gedi scans. Additionally, because this is an in vivo machine, the scroll could simply be placed within the scan chamber and did not need to be mounted for scanning. This limited the risk of physical damage to the object.

An initial, single field-of-view scan was done on the scroll to verify the scan parameters and to confirm that the resolution requirements had been met. This scan was performed at 50 kV, 500 μA, and 350-ms exposure time, with added filtration (0.2 Al) to improve image quality by absorbing lower-energy x-rays that tend to produce scattering. The reconstructions showed very clear separation of layers within the scroll, which indicated a good opportunity for segmentation. The scan protocol was then modified to increase contrast, where the team suspected that there may be visible ink. To acquire data from as much of the scroll as possible, the second and final scan was an offset scan using four connected scans. Final exposure parameters of 45 kV, 555 μA, and 230 ms were selected for this scan. The data were reconstructed using Bruker SkyScan’s NRecon engine, and the reconstructed slices were saved as 16-bit TIFF images for further analysis.

Segmentation

The basis for the algorithm is a local estimate of the differential geometry of the animal skin surface using a second-order symmetric tensor and associated feature saliency measures (17). The tensor-based saliency measures are available at each point in the volume. The 3D structure tensor for point p is calculated as

T(p) = gσ ∗ (∇u(p) ∇u(p)ᵀ)    (1)

where ∇u(p) is the 3D gradient at point p, gσ is a 3D Gaussian kernel, and ∗ denotes convolution. The eigenvalues λ₁ ≥ λ₂ ≥ λ₃ and corresponding eigenvectors e₁, e₂, e₃ of this tensor provide an estimate of the surface normal at p:

n(p) = e₁    (2)
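A minimal sketch of this computation (assuming a NumPy/SciPy environment; this is illustrative code, not the authors' implementation) smooths the outer product of the intensity gradient with a Gaussian and takes the dominant eigenvector as the normal estimate:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_normal(volume, sigma=2.0):
    """Estimate a surface normal at every voxel from the 3D structure
    tensor: the Gaussian-smoothed outer product of the intensity gradient."""
    gz, gy, gx = np.gradient(volume.astype(np.float64))  # axes: z, y, x
    grads = [gx, gy, gz]
    T = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            T[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
    # The eigenvector of the largest eigenvalue points across the layer,
    # i.e., along the estimated surface normal (sign is ambiguous).
    w, v = np.linalg.eigh(T)      # eigenvalues in ascending order
    return v[..., :, -1]          # (x, y, z) components of the normal
```

For a well-resolved layer, the largest eigenvalue dominates the other two; that saliency gap can be used to flag voxels where the normal estimate is unreliable.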

The algorithm begins with an ordered chain of seed points localized to a single slice by a user. From the starting point, each particle in the chain undergoes a force calculation that propagates the chain forward through the volume. This region-growing algorithm estimates a new positional offset for each particle in the chain based on the contribution of three forces: gravity (a bias in the direction of the long axis of the scroll), neighbor separation, and the saliency measure of the structure tensor. First, n(p) is biased along the long axis of the scroll by taking the vector rejection of an axis-aligned unit vector Z from n(p):

G = Z − (Z · n(p)) n(p)    (3)

To keep particles moving together, an additional restoring spring force S is computed using a spring stiffness coefficient k and the elongation factors Xₗ and Xᵣ between the particle and its left and right neighbors in the chain:

S = k(Xₗ + Xᵣ)    (4)

In the final formulation, a scaling factor α is applied to G, and the final positional offset Δp for the particle is the normalized summation of all terms:

Δp = (αG + S) / ‖αG + S‖    (5)

The intuition behind this framework is the following: The structure tensor estimates a surface normal, which gives a hint at how a layer is moving locally through points (Eqs. 1 and 2). The layer should extend in a direction that lies in its local tangent plane, which is perpendicular to the surface normal. The gravity vector nudges points along the major axis around which the surface is rolled, a strong hint about the general direction in which to extend the surface. Moving outward, away from the major axis and across layers, generally defeats the goal of following a single layer. The spring forces maintain a constant spacing between points, restraining them from moving independently. These forces must be balanced relative to one another, which is done by trial and error.

We applied the computed offset iteratively to each particle, resulting in a surface mesh sampled at a specific resolution relative to the time step of the particle system. The user can tune the various parameters of this algorithm based on the data set and at any time during the segmentation process. We also provided a feedback interface whereby a human user can reliably identify a failed segmentation and correct for mistakes in the segmentation process.

For each small segmentation, we used spring force constants of 0.5 and an α scaling factor of 0.3. Segmentation chain points had an initial separation of 1 voxel. This generally produced about six triangles per 100 μm in the segmented surface models. When particles crossed surface boundaries because the structure tensor did not provide a valid estimate of the local surface normal, the chain was manually corrected by the user.
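One propagation step of the chain can be sketched as follows (a toy reconstruction under the force model described above, with the stated constants k = 0.5 and α = 0.3; the helper is hypothetical, not the published implementation):

```python
import numpy as np

def chain_step(points, normals, Z, k=0.5, alpha=0.3, rest=1.0):
    """Advance an ordered chain of particles one step: an axis-aligned
    gravity bias projected into the local tangent plane, plus spring
    forces restoring the resting separation to each neighbor."""
    new_points = np.empty_like(points)
    for i, (p, n) in enumerate(zip(points, normals)):
        # Gravity: the component of the long-axis vector Z that lies
        # in the tangent plane of the estimated surface normal n.
        G = Z - np.dot(Z, n) * n
        # Springs: pull toward the resting distance to each neighbor.
        S = np.zeros(3)
        for j in (i - 1, i + 1):
            if 0 <= j < len(points):
                d = points[j] - p
                dist = np.linalg.norm(d)
                S += k * (dist - rest) * (d / dist)
        step = alpha * G + S
        new_points[i] = p + step / np.linalg.norm(step)  # unit offset
    return new_points
```

Iterating this update sweeps the chain through the volume, tracing out one layer of the rolled surface.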

Texturing

Two shapes were tested for the texturing neighborhood: a spheroid and a line. The line neighborhood includes only those voxels that directly intersect the surface normal. When the surface normal is accurate and smoothly varying, the line neighborhood gives parametric control over how the texture calculation incorporates voxels that are near, but not on (or within), the surface. The line neighborhood leads to faster processing times and less blurring on the surface, although the spheroid neighborhood supports more generalized experimentation (the line neighborhood is a degenerate spheroid). We settled on a bidirectional line neighborhood (voxels in both the positive and negative normal directions) with a primary axis length of 7 voxels. We filtered the neighborhood using a Max filter because the ink response (density) in the volume was much brighter than the response of the animal skin.
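The chosen line neighborhood and Max filter can be illustrated in a few lines (a sketch under our own naming, not the released code; coordinates assumed in (z, y, x) voxel order):

```python
import numpy as np

def texture_intensity(volume, point, normal, radius=3):
    """Sample a bidirectional line of voxels along the surface normal
    (7 voxels for radius=3) and keep the brightest one, since ink is
    denser, and therefore brighter, than the skin in the CT volume."""
    samples = []
    for t in range(-radius, radius + 1):
        q = np.rint(point + t * normal).astype(int)  # nearest voxel
        if all(0 <= c < s for c, s in zip(q, volume.shape)):
            samples.append(volume[tuple(q)])
    return max(samples)  # Max filter over the neighborhood
```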

Flattening

Our flattening implementation makes use of Bullet Physics Library’s soft body simulation, which uses position-based dynamics to solve the soft body system. Points along one of the long edges of the segmentation were pinned in place while a gravity force along the x axis was applied to the rest of the body. This roughly “unfurled” the wrapping and aligned the mesh with the xz plane. A gravity force along the y axis was then applied to the entire body, which pushed the mesh against a collision plane parallel to the xz plane. This action flattened the curvature of the mesh against the plane. A final expansion and relaxation step was applied to smooth out wrinkles in the internal mesh.
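The pin-and-collide idea can be caricatured in a few lines (a toy position-based sketch with the mesh reduced to free points and no spring or expansion constraints; the actual implementation uses Bullet's soft body solver):

```python
import numpy as np

def flatten(points, pinned, plane_y=0.0, iters=200, g=0.05):
    """Pinned points stay fixed; the rest fall under a y-axis gravity
    and are clamped against the collision plane y = plane_y."""
    pts = points.copy()
    for _ in range(iters):
        pts[~pinned, 1] -= g                         # gravity step
        pts[:, 1] = np.maximum(pts[:, 1], plane_y)   # collision plane
    return pts
```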

Merging

In total, around 140 small segmentations were generated during the segmentation process. These were mesh-merged to produce seven larger segmentations, roughly one for each wrap of the scroll. Each large segmentation was then flattened and textured individually. The final set of seven texture images was texture-merged to produce the master view imagery for this paper. All merging steps were performed by hand: texture merges in Adobe Photoshop and mesh merges in MeshLab. An enhancement curve was applied uniformly to the merged master view image to increase visual contrast between the text and substrate.

SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/2/9/e1601247/DC1

table S1. Radiocarbon dating results of the En-Gedi scroll (25).

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

REFERENCES AND NOTES

Acknowledgments: We thank D. Merkel (Merkel Technologies Ltd.), who donated the volumetric scan to the IAA. Special thanks to the excavators of the En-Gedi site, D. Barag and E. Netzer (Institute of Archaeology at the Hebrew University of Jerusalem). Radiocarbon determination was made at the DANGOOR Research Accelerator Mass Spectrometer (D-REAMS) at the Weizmann Institute. We thank G. Bearman (imaging technology consultant, IAA) for support and encouragement. W.B.S. acknowledges the invaluable professional contributions of C. Chapman in the editorial preparation of this manuscript. Funding: W.B.S. acknowledges funding from the NSF (awards IIS-0535003 and IIS-1422039). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. W.B.S. acknowledges funding from Google and support from S. Crossan (Founding Director of the Google Cultural Institute). Author contributions: W.B.S. conceived and designed the virtual unwrapping research program. C.S.P. directed the software implementation team and assembled the final master view. P.S. acquired the volumetric scan of the scroll and its radiocarbon dating and provided initial textual analysis. M.S. and E.T. edited the visible text and interpreted its significance in the context of biblical scholarship. Y.P. excavated the En-Gedi scroll on May 5, 1970. The manuscript was prepared and submitted by W.B.S. with contributions from all authors. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
All scan data and results from this paper are archived at the Department of Computer Science, University of Kentucky (Lexington, KY) and are available at http://vis.uky.edu/virtual-unwrapping/engedi2016/.

 

View Abstract

Muslim Teen Honored for Chasing Down Attacker of Orthodox Woman – Breaking News – Forward.com

Muslim Teen Honored for Chasing Down Attacker of Orthodox Woman Marcy OsterJanuary 8, 2017 (JTA) — A Muslim teen from New York City who helped police catch a man who hit an Orthodox Jewish woman on the subway was honored by community leaders. Ahmed Khalifa, 17, stopped the Brooklyn-bound train on Dec. 28 so that the woman could get medical attention and then jumped off the train to follow the assailant. The slap broke the woman’s glasses and caused her to lose consciousness. She was removed from the train and taken to a local hospital. Khalifa followed the assailant and contacted the Shomrim Jewish safety patrol who got the police involved. He waited near the bus stop until the police arrived. The police removed the assailant, identified as Rayvon Jones, 31, from the bus. Jones was charged with assault in Brooklyn Criminal Court. State Assemblyman Dov Hikind on Thursday presented Khalifa with a donated laptop computer for college in the fall and a citation praising his actions. Hikind noted that Khalifa is Muslim and that he helped an Orthodox Jewish woman. “In a time of such divisiveness, it’s refreshing to see a story like this resonate within our communities,” said Hikind, who is an Orthodox Jew, and was joined by community faith leaders and politicians in the ceremony. “I’m just a guy. I think everyone should be doing this because we are all one people; I would help anyone out no matter who they are, I’m just happy people are learning that this is the right thing to do,” Khalifa said in a statement, the Brooklyn Eagle reported.

Carbon Capture Breakthrough by an Indian company

Indian firm makes carbon capture breakthrough

Carbonclean is turning planet-heating emissions into profit by converting CO2 into baking soda – and could lock up 60,000 tonnes of CO2 a year

 
Tuticorin thermal power station near the port of Thoothukudi on the Bay of Bengal, southern India. The plant is said to be the first industrial-scale example of carbon capture and utilisation (CCU).
 Tuticorin thermal power station near the port of Thoothukudi on the Bay of Bengal, southern India. The plant is said to be the first industrial-scale example of carbon capture and utilisation (CCU). Photograph: Roger Harrabin

A breakthrough in the race to make useful products out of planet-heating CO2 emissions has been made in southern India.

A plant at the industrial port of Tuticorin is capturing CO2 from its own coal-powered boiler and using it to make baking soda.

Crucially, the technology is running without subsidy, which is a major advance for carbon capture technology as for decades it has languished under high costs and lukewarm government support.

The firm behind the Tuticorin process says its chemicals will lock up 60,000 tonnes of CO2 a year and the technology is attracting interest from around the world.

Debate over carbon capture has mostly focused until now on carbon capture and storage (CCS), in which emissions are forced into underground rocks at great cost and no economic benefit. The Tuticorin plant is said to be the first unsubsidised industrial scale example of carbon capture and utilisation (CCU).

There is already a global market for CO2 as a chemical raw material. It comes mainly from industries such as brewing where it is cheap and easy to capture.

Until now it has been too expensive without subsidy to strip out CO2 from the relatively low concentrations in which it appears in flue gas. The Indian plant has overcome the problem by using a new CO2-stripping chemical.

It is just slightly more efficient than the current CCS chemical amine, but its inventors, Carbonclean, say it also needs less energy, is less corrosive, and requires much smaller equipment meaning the build cost is much lower than for conventional carbon capture.

The new kit has been installed at Tuticorin Alkali Chemicals. The firm is now using the CO2 from its own boiler to make baking soda – a base chemical with a wide range of uses including glass manufacture, sweeteners, detergents and paper products.

The firm’s managing director, Ramachandran Gopalan, told BBC Radio 4: “I am a businessman. I never thought about saving the planet. I needed a reliable stream of CO2, and this was the best way of getting it.” He says the plant now has virtually zero emissions to air or water.

Carbonclean believes capturing usable CO2 can deal with perhaps 5-10% of the world’s emissions from coal. It’s no panacea, but it would be a valuable contribution because industrial steam-making boilers are hard to run on renewable energy.

The inventors of the new process are two young chemists at the Indian Institute of Technology in Kharagpur. They failed to find Indian finance and were welcomed instead by the UK government, which offered grants and the special entrepreneur status that whisks them through the British border.

The firm’s headquarters are now based in London’s Paddington district. Its CEO, Aniruddha Sharma, said: “So far the ideas for carbon capture have mostly looked at big projects, and the risk is so high they are very expensive to finance. We want to set up small-scale plants that de-risk the technology by making it a completely normal commercial option.”

By producing a subsidy-free carbon utilisation project, Carbonclean appears to have something of a global lead. But it is by no means alone. Carbon8 near Bristol is buying in CO2 to make aggregates, and other researchers are working on making plastics and fuels from waste CO2.

At last, it seems, the race to turn CO2 into profit is really on.

Israeli rabbis have approved the practice of polygamy

Israeli rabbis have approved the practice of polygamy to counter what they believe is a demographic threat posed by Arab populations living in Israel and the occupied Palestinian territories.

An expose by Channel 10, an Israeli broadcasting channel, revealed the practice was approved by the rabbinate that has actively encouraged and facilitated polygamy, claiming the practice will give Jews an edge in the demographic race against Arabs in Israel.

One rabbi who has been married for 26 years is filmed by an undercover reporter persuading a single woman to become his second wife.

“If your parents ask you why you don’t marry like everyone else,” he told her, “tell them that it is a mitzvah [religious commandment] and I want to do a mitzvah.”

The rabbi showed the reporter a letter signed by Jerusalem’s Chief Rabbi Shlomo Amar permitting him to marry a second wife.

Read: Israeli rabbis launch war on Christmas tree

Reporting on the story, The Times of Israel commented that “although Jewish law forbids a woman to marry more than one husband, a practice known as polyandry, it does permit a man to marry more than one wife.”

“There are several instances of polygamy in the Bible, including two of the three patriarchs (Abraham and Jacob) and many of the kings. Jewish law gives guidelines as to the circumstances under which polygamy is permitted,” The Times of Israel explained.

The Israeli newspaper also claimed that there are cases outside of Israel, primarily within Sephardic communities, where a husband who refuses to divorce his wife is granted permission to remarry by a rabbi. This leaves the first wife as an aguna, or chained woman, who is forbidden by Jewish law from remarrying.

A spokesperson for a pro-Jewish demographic domination group, The Complete Jewish Home, told Channel 10: “We are dealing with men and women who are responsible, and this is a solution to the problem of having more single women than men seeking marriage. It also ensures the Jewish demographic majority in the country, and guarantees the right of religious women to become mothers.”

Though polygamy has been illegal in Israel since 1977, authorities largely turn a blind eye to the practice.

Read: ‘Return to the Mount’ activists seek destruction of Al-Aqsa

 


 
 

How the Graffiti Boys ignited the Syrian Revolution

How the Graffiti Boys ignited the Syrian Revolution

 
803
SHARES
 

“Ashaab yureed isqat annidham.” This phrase is ringing in the ears of tyrants and despots around the Arab world and means quite simply that the people want to bring down the regime. It is the enduring chant of the Arab Spring, so it’s hardly surprising that these are probably the first words children learn in their cradles as they are rocked to sleep to the beat of this rousing street anthem.

When a group of 11-year-old Syrian boys made their way home from school one day and started larking around, as boys of that age do, it was almost inevitable that among the graffiti they scratched on a partially-collapsed wall would be these iconic words. By revolutionary standards it was an unremarkable act, hardly worthy of mention because the same graffiti can be found on walls in most Arab countries. However, just as hard-up Tunisian fruit seller Mohammed Bouazizi is credited with igniting the Arab Spring with his self-immolation, this long forgotten, single act of childish vandalism lit the touch paper of the Syrian Revolution.


 It was a seminal moment in time that the Arab world’s Leftists would rather you forget; in their frenzied bid to rewrite the history of the Arab Spring they want you to believe that crazed Islamists are hijacking the peoples’ revolution. The Left in Syria, you see, isn’t as cuddly as the splintered socialist groups in Britain. These are hard-line fundamentalist, religion-hating secularists who have no room, not even a square inch, for religion in their world; not for themselves and not for anyone else.

While the people in Tunisia and Egypt fought for freedom from tyranny they also wanted the freedom to re-engage with their faith. Hence, to the shock and horror of the Arab Left, the people voted for trusted groups like the Muslim Brotherhood. It was something that the Left had never considered; they dismissed the Muslim vote as a figment of the imagination; never once did they ever imagine that Muslim groups would form political parties or even want to engage in democracy. And in Libya, although some of the more fundamentalist Islamic groups failed to secure the popular vote other Islamic flavoured parties were swept in to power.

So Syria, you see, is probably the Arab Left’s last chance at having a revolution free from religion. This is most likely the reason for their opposition to the revolution from the very outset because they knew for sure that it would carry a strong religious flavour. Well, sorry to disappoint them. I crossed the length and breadth of Syria shortly before the revolution and saw most communities, Christian and Muslim alike, holding tight to their faith. Whatever shape their revolution will take, the future will be dominated by believers.

But let me return to the 18 boys at the beginning of this story because it is vitally important that we all remember exactly how the revolution in Syria began. It did not begin with CIA interference, nor an influx of foreign fighters, Al-Qaida, rebranded weapons from the West, NATO or a global call across the Muslim world for jihad.This was a reluctant revolution, a revolution forced on the people by the acts of an evil, malevolent regime.

In fact, though, while the 18 boys may have loaded the revolutionary gun way back in February 2011, the trigger was pulled by a man called Atif Najeeb, a cousin of Syrian President Bashar Al-Assad. Within two hours of the schoolboy prank, Atif instigated raids on every single one of the boys’ homes: armed police, the military and the ubiquitous security officials stormed every home at precisely the same time, demanding that the children be handed over. Amid the drama there was hand-wringing, cries from mothers, pleas from fathers to take the place of their sons and general confusion and chaos.

Distraught as news swept Dar’a, in the south-west of Syria less than six miles from the Jordanian border, the parents and their relatives gathered outside his office, but Atif Najeeb along with Faisal Kalthoum the governor of Dar’a, refused to meet any one of them.

For four days the families waited but not a single scrap of news came out about the fate of their sons. Eventually, a delegation made up of family members, local imams, the local headteacher and other dignitaries assembled and once again demanded to see Najeeb or the governor. After three hours they were herded into a room to meet the governor who remained seated while deliberately keeping the delegation on their feet. Culturally, this is a huge insult in the Arab world. At this point no one had even an inkling of what the boys had done to deserve their fate. The parents’ pleas to have their children returned were ignored and the governor advised them to forget about them.

He allegedly said: “My advice to you is that you forget you ever had these children. Go back home and sleep with your wives and bring other children into the world and if you can not do that, then bring your wives to us and we will do the job for you.”

By this time families in towns and villages across the region were shocked and outraged by what had happened and began to demonstrate and rally to show their support for the boys, their families and the town of Dar’a. They included local people from Dayr Al-Zawr, Idlib and Homs. While some did suggest that it was time for a revolution, the families kept to only two demands: the return of their children and the sacking of the governor for his crude and inflammatory remarks.

As the pressure mounted on Kalthoum, a helicopter full of military thugs was flown in to Dar’a to quell the unrest and during clashes with local citizens several people from Dar’a were killed. They went to their graves not knowing what the children had done to incur the wrath of the governor.

Eighteen days later, when it was clear that the parents and families would not be appeased until their children were returned, the boys were released. Their condition was pitiful and shocking; all were traumatised beyond recognition. All had had their fingernails torn off. One had lost an eye while several had fractured skulls and all had at least one broken limb. Today, those boys still bear the whip marks and scars on their bodies which bear testimony to the brutal nature of their detention and torture. Several of them still have nightmares recalling the screeches and screams of their fellow inmates.

Far from calming the situation, the physical evidence that the boys had been tortured enraged the people of Dar’a who made their own two demands: the dismissal of the governor and justice delivered to those who had done such wicked things to the boys including Atif Najeeb and his torture squad.

The relatively low level demands carried on for the next six months and those making them resisted calls for a full-blown revolution and offers of outside intervention; there were many in the Arab world who wanted to take up arms in support of their brothers and sisters in Dar’a and the dozens of Syrian cities and towns now in full revolutionary mode. Moreover, while insisting that their reasonable demands be met, some of the families pleaded for calm and even argued that Assad could not possibly have known or allowed this atrocity to happen. Surely, a London-graduated doctor and Ophthalmologist could not have consented to this barbarism, they argued.

By August the death toll across Syria had reached 1,000 and then the foreign fighters arrived, not to help the people of Dar’a but to destroy their spirit and morale. The fighters were mercenaries from numerous neighbouring and distant countries including former Soviet satellite states who, in the pay of the Assad regime, embarked on a killing and raping spree.

The plan was to subdue the spreading uprising and instill fear in the lives of the Syrian people, those who dared protest and those who were considering joining the growing crowds on the streets. Instead, the gates of Hell were opened and talks of compromise and low level demands gave way to screams of “Ashaab yureed isqat annidham”.

As news of the atrocities in Dar’a and other Syrian cities reached Damascus some senior officers in the military could no longer stomach what was being done in their name. They defected from the regime and formed what is now known as the Free Syrian Army. It’s not an army of outsiders; it was founded by Syrian officers and grew in popularity and prominence with the media because of its name.

Speculation is rife about the emergence of Al-Qaida, foreign jihadists, support from Arab countries, subversive tactics by Arab countries, infiltration by the CIA and Mossad, just about everyone, in fact, bar the Free Wales Army. Some of the speculation is true, some is not, but don’t allow anyone to rewrite the history of the start of the Syrian Revolution.

One day peace will come to Syria and when it does the Graffiti Boys should be remembered and their names should go up on another wall in Dar’a – a wall where their names can be carved with pride.

Some of them may not survive the war but some will finally enjoy the taste of freedom. Today I salute them and remember each one by name and urge you to remember them too, so that when Syria’s history is written in full they will not be forgotten: 1) Muawiya Faisal Sayasneh 2) Yusuf Adnan Sweidan 3) Samer Ali Sayasneh 4) Ahmed Jihad Abazeid 5) Isa Hasan Abulqayyas 6) Ala Mansour Irsheidat Abazeid 7) Mustafa Anwar Abazeid 8) Nidhal Anwar Abazeid 9) Akram Anwar Abazeid 10) Nayef Muwaffaq Abazeid 11) Basheer Farooq Abazeid 12) Ahmed Thani Irshiedat Abazeid 13) Ahmed Shukri Al-Akram 14) Abdulrahman Nayef Al-Reshedat 15) Mohammed Ayman Munwer Al-Karrad 16) Ahmed Nayef Al-Resheidat Abazeid 17) Nabeel Imad Al-Resheidat Abazeid 18) Mohammed Ameen Yasin Al-Resheidat Abazeid.

DN Jha’s ‘The Myth of the Holy Cow’ examined facts, and its detractors didn’t like that.

It’s easy to see why the Right wanted this book about Indians’ beef-eating history to be banned

DN Jha’s ‘The Myth of the Holy Cow’ examined facts, and its detractors didn’t like that.

Right next to my school in Chennai, there used to be a hole-in-the-wall eatery that served as a rite of passage for most of us who’d hit high school. It resembled more a shotgun shack rather than a respectable dining establishment, the kitchen walls seemed covered in soot, so that you could hardly see what was going on in there. The only sensory stimuli was aural, provided by a furious wok and a spatula. The menu was limited, and everybody more or less ordered the same thing, a plate of beef fry along with a beef fried rice.

In retrospect, I don’t think it constituted a fine culinary experience, but there certainly was some subversive relish in the whole activity, and while beef isn’t as ubiquitous in Tamil Nadu as it is in Kerala, it wasn’t all that hard to come by in Chennai. By the time a few years had passed, I had been benumbed of the novelty of these clandestine beef runs and was wholly converted, eating the bovine ilk at any given opportunity.

It was at this juncture that I found myself in Delhi, hankering for beef in a city that offered few such oases. I was directed to a Malayali mess in Hauz Khas, one where the beef fry was the only item on the menu written in Malayalam. I proceeded to place my order in what I felt was an acceptable amplitude for a mess, only to be met with a circumspect stare from the waiter, who trotted off to the kitchen hastily and returned to drop it on my table with the gravitas of a Cold War era dead drop.

In 2010, when I was a university student in Bangalore, the Karnataka government threatened to pass a new law, enacted in “The Prevention of Slaughter and Preservation of Cattle Bill 2010”, but this was eventually rejected by the former governor of the state HR Bhardwaj, who felt the Bill “adversely affects lakhs of people’s lives and lacks legislative competence.” It was around this time that I first heard the historian and academic DN Jha’s name in passing for the first time.

History, not hate

Jha’s Myth Of The Holy Cow was never intended to be provocative. It was merely an academic inquiry into the historical beef-eating dietary habits on the Subcontinent, one that was buttressed with ample evidence from ancient texts and sources. However, many publishers refused to take on the heat of brining it out, and after it was finally published in 2001 by Matrix Books, it was received with the usual symphony of death threats and brickbats from the fundamentalist Hindutva right.

The book promptly faced a court injunction, and would eventually resurface only in 2009, published this time by Navayana with a pop art cover whose appealing shelf presence would probably betray the sober, academic content of the book itself. Jha’s central premise is that the practice of eating beef is hardly a foreign concept brought to India by Islamic and Christian influences, and that people from the Vedic era not only ritually sacrificed the cow, but also relished its meat.

Jha meticulously brings this out by drawing our attention to various ancient texts – the Vedas, epics like the Ramayana, and Buddhist and Jain scriptures. Jha also charts out society’s transformation from a pastoralist, nomadic group to settled agriculturalists, where the the status of the cow was elevated to that a beast of burden.

However, the cow hadn’t yet become holy, with neither the Laws of Manu (which prohibit eating cows but condones their slaughter by Brahmins) nor the concept of ahimsa in Buddhist and Jain thought elevating the animal to a hallowed status. Beef-eating eventually became taboo in the medieval period among upper caste Hindus, but the cow was yet to be canonised, as this happened much later in the 19th century, when various cow protection groups galvanised themselves on this nebulous notion of a shared identity.

As Jha puts it “The holiness of the cow is elusive. For there has never been a cow goddess, nor any temple in her honour. Nevertheless the veneration of this animal has come to be viewed as a characteristic trait of modern day non-existent monolithic ‘Hinduism’ bandied about by the Hindutva forces.”

Unholy nexus

In the past couple of years, we have seen a Muslim man murdered on suspicion of storing beef in UP, two Muslim cattle herders lynched in public in Jharkhand, and Dalits flogged for carrying cow carcasses in Gujarat. To understand the morbid paranoia that has taken over the Hindu right, we must look to the words of one of its seers, a Chitpavan Brahmin and former sarsanghchalak of the RSS, MS Golwalkar: “The expression ‘communalism of the majority’ is totally wrong and misconceived. In a democracy the opinion of the majority has to hold sway in the day-to-day life of the people. As such it will be but proper to consider the practical conduct of the life of the majority as the actual life of the national entity.”

We’ve never come closer to reaching Golwalkar’s perverse ideal in the history of the Indian republic than now, but the seeds of this hysteria have always been around. To cite a small instance in the realm of art, take Girish Karnad’s masterful Tabbaliyu Neenade Magane, a film adaptation of SL Bhyrappa’s novel of the same name. Here we see an irate mob of villagers gathering around the feudal mansion of Kalinga Gowda, whose American wife Hilda orders the slaughter of the household cow and eventually eats it, which results in their being outcast by the community.

In direct opposition to Golwalkar’s parochial idea of the republic stands BR Ambedkar, whose insightful and highly readable essay, Untouchability and the Dead Cow, finds it way to the Navayana edition of Jha’s book. As the chief framer of the Indian Constitution, he managed to slip in the crucial Article 29 and 30 to counter such unsophisticated partisan instincts. It reads: “Any section of the citizens residing in the territory of India or any part thereof having a distinct language, script or culture of its own shall have the right to conserve the same.”

Ambedkar further elucidated this deliberate inclusion by saying, “It is also used to cover minorities which are not minorities in the technical sense, but which are nonetheless minorities in the cultural and linguistic sense…if there is a cultural minority which wants to preserve its language, its script and its culture, the State shall not by law impose upon it any other culture which may be either local or otherwise.” Ultimately, the Malayali heading to the Kerala mess for a beef fry in Delhi is far more in line with the Constitution than the man stoking the tinderbox of cow protection in his community over what his neighbour is allegedly storing in the freezer.


Bhopal Fake Encounter

After several incriminating videos of the so-called ‘encounter’ (a classic Indian euphemism for cold-blooded, execution-style murder by cops), now come some damning stills!

Surely, it is just a coincidence that the killer and the killed are wearing the same sports shoes, same make and colour. Not fishy at all, no?

But the big question is: Since jails don’t allow belts/shoes/watches, how/where did #SIMI men find these? While walking to the jungle? In the shoe closet at the three heavily-guarded gates they crossed? Or the shoe rack on top of the jail’s watch tower? Or did they first walk from the jail to the ATS squad office, raid their sports shoes supplies, then quietly march to the jungle, huddle together atop a hill, and wait to be killed?

Let’s not even ask the other questions! Between the 8 of them, two allegedly had desi, crude pistols – how did they manage to get these? Or for that matter the knives? From some corrupt prison staff? Or found them lying on the road they walked on? Or through an accomplice, waiting for them outside? In which case, why was the accomplice stupid enough not to get an SUV or a van to drive them a couple of hundred miles away at night, maybe even take them outside the state and/or drop them to a railway station, for them to vanish on separate long-distance trains?

Police reports say the escapees fired 2 rounds at them while they fired 47 rounds. So, the escapees got 2 desi guns with only one bullet each? How stupid of them!

A video shows a cop ‘recovering’ a knife from an escapee’s waistband. A shiny broad blade, 5-6 inches long. How did this escapee manage to walk with this knife tucked inside his (new) jeans, without slicing his groin or cutting open his thigh? Warning: don’t try this at home, but without walking, just try & tuck a kitchen knife into your waistband!

The jailbreak itself is the stuff of legend. Like Jai-Veeru in Sholay with the angrezon ke zamaaney ke jailer.

Let’s accept scene 1 of the MP cops’ screenplay. That the escapees attacked a warden. Then stole the keys from him to open other cells to free their friends. But they forgot to free their own leader Abu Faisal alias Doctor from a nearby cell? How clumsy and stupid, but let’s say they were in too much of a hurry!

But how did they manage from there? The staffer inside the cell doesn’t carry keys to the entire jail and all its entry/ main & intervening security gates, but only for his block. Any movie-goer knows that!

So, how did the escapees cross gate 1, 2 and 3? Did these gates have no security or armed guards? The cops’ screenplay suggests they had made keys from wood and metal spoons. When? That night itself? Or well in advance? Either way, the jail staff on duty and armed guards just looked on as these escapees stood around gate 1 for hours, filing and chiseling the spoon, fitting & trying it and finally making a duplicate key? And then, they repeated the process at gates 2 and 3?

The screenplay now suggests that they climbed (multiple sets of?) 30 foot high walls by making a “ladder out of bedsheets”. Imagine 1 man climbing the rope-knotted-bedsheets and reaching the top of the first wall. Then what – he jumps down thirty feet, landing perfectly on his brand-new sports shoes? Or climbs down another bedsheet-rope that he has carried to the top? Now it is the turn of escapees 2, 3, 4, 5, 6, 7 and 8 to repeat this mountaineering feat.

No police or jail staff spots them during the long time it took them to do this? No one from the watch tower? No armed guards see them? And – no CCTV camera captures this remarkable Olympian achievement?

If this were a Bollywood movie, we would’ve walked out saying “kya bakwaas script hai”! Even Bollywood veterans like Ajay Devgn, Raveena Tandon, Anupam Kher and Paresh Rawal would agree. But this screenplay carries on.

The escapees come out of the ISO-certified Bhopal Central jail. What do they do next? They go for a leisurely walk, covering less than 1.5 km/hour for 7-8 hours. Not in any hurry. No sense of urgency to escape. And they decide to stick together, all 8 of them. Not split into 2 teams of 4 each or 4 teams of 2 each and go off in different directions, making their own capture difficult. Did they never watch TV or any films – and were they just too stupid to take these basic, elementary steps? In that case, they seem to be too foolish to be “dreaded terrorists” or any sort of “masterminds”!

The script now says that apart from 2 desi guns, the escapees get some knives – for what? Hand-to-hand combat with the heavily-armed ATS men? An IGP on camera suggests that his men have sustained injuries, but not from the only 2 bullets supposedly fired at them. So, knife injuries? How? Maybe once the escapees had been shot dead and the ATS chaps went nearer to check, the corpses’ hands twitched and jerked involuntarily but inflicted precise non-life-threatening injuries. Even in death, they were considerate enough to not slice open an artery or a vein – such kind-hearted terrorists?!

Triumphant IGP and valiant cops speak of the ‘encounter’ proudly on camera, telling us how bravely they tackled these men. Gushing TV anchors, wearing flak jackets and in combat mode themselves, inform us that 8 terrorists have been “neutralised” – never mind that these were undertrials, yet to be convicted for any offence, let alone for terrorism, by any court.

A Madhya Pradesh minister slips up and admits that the ‘terrorists’ had no gun, only some kitchen utensils they’d sharpened into makeshift knives. He blows apart the whole brave-cops-under-enemy-fire-reluctantly-retaliate-and-kill narrative, in which the cops sustain no injuries themselves. The IGP also contradicts himself.

And then the worst happens – a video apparently shot by the local sarpanch shows us a plainclothes cop ‘recovering’ a knife-machete, another brave one aiming at a figure lying on the ground and firing in cold blood while off-screen voices shower some abuse and even acknowledge that someone is shooting it all on video.

And then comes a second video.
And a third video.
And a fourth video.
All of them are on YouTube for anyone to browse.

They show us 5 men atop a hillock, no guns or weapons in sight. An independent eyewitness journalist writes to CM Chouhan – they had no way to escape – on the other side was a sheer drop of a couple of hundred feet. They were surrounded. Killed in cold blood.

Finally – Dear Madhya Pradesh Government, ATS & Police, you need to hire better scriptwriters; this is such a terrible story full of large, gaping holes! Learn from the #GujaratModel. 20-30 “encounters”, clearer plotlines & savvier ‘spin’ management! Despite that, they made history by having the largest number of IPS officers & cops arrested & jailed in independent India! And – never have your Home Minister ‘monitor’ the encounters directly – it may land him in jail too!

As a film-maker, let me tell you that cinema works on the basis of “willing suspension of disbelief”. But even in that context, your story, script, screenplay & dialogues are terrible! And people in real life don’t embrace disbelief readily, unless they’re your cadres or blind Bhakts.

The citizens will ask questions, challenge your narratives, criticize you, keep you in check as is mandated in the Indian Constitution under the chapter titled Fundamental Duties!

– Rakesh Sharma


HIV was present in New York in 1970

 

Image: Colorized electron microscope image of the HIV virus.

The human immunodeficiency virus, which causes AIDS. BSIP/UIG Via Getty Images

A new genetic study confirms theories that the global epidemic of HIV and AIDS started in New York around 1970, and it also clears the name of a gay flight attendant long vilified as being “Patient Zero.”

Researchers got hold of frozen samples of blood taken from patients years before the human immunodeficiency virus (HIV) that causes AIDS was ever recognized, and teased out genetic material from the virus from that blood.

They used it to show that HIV was circulating widely during the 1970s, and certainly before people began noticing a “gay plague” in New York in the early 1980s.


“We can date the jump into the U.S. in about 1970 and 1971,” Michael Worobey, an expert on the evolution of viruses at the University of Arizona, told reporters in a telephone briefing.

“HIV had spread to a large number of people many years before AIDS was noticed.”

Their findings also suggest HIV moved from New York to San Francisco in about 1976, they report in the journal Nature.

Image: A laboratory technician examines blood samples for HIV/AIDS in a public hospital in Valparaiso city
A laboratory technician examines blood samples for HIV Eliseo Fernandez / Reuters, file

“New York City acts as a hub from which the virus moves to the west coast,” Worobey said.

Their findings confirm widespread theories that HIV first leapt from apes to humans in Africa around the beginning of the 20th century and circulated in central Africa before hitting the Caribbean in the 1960s. The genetic evidence supports the theory that the virus came from the Caribbean, perhaps Haiti, to New York in 1970. From there it spread explosively before being exported to Europe, Australia and Asia.

HIV now infects more than 36 million people worldwide. About 35 million have died from AIDS, according to the United Nations AIDS agency. Two million are infected every year and more than 1 million died of it last year.

Related: New HIV Vaccine Will be Tested in South Africa

In the United States, more than 1.2 million people have HIV, and about 50,000 people are newly infected each year. More than two dozen medications now on the market can keep infected people healthy, and can prevent infection, but there is no vaccine and there is no cure.

Researchers now know a lot about HIV and how it attacks the body’s immune cells, gradually destroying the immune system. It takes about 10 years or so for untreated and undiagnosed patients to realize something is badly wrong as they develop AIDS-defining illnesses such as Kaposi’s sarcoma, a rare form of cancer, or pneumonia, or as they succumb to tuberculosis.


It infects men, women and children alike, is passed to newborns from infected mothers, and spreads like wildfire in infected needles shared by drug users.

But in the early 1980s in New York and San Francisco, all people knew was that something mysterious was killing gay men.

“In New York City, the virus encountered a population that was like dry tinder, causing the epidemic to burn hotter and faster and infecting enough people that it grabs the world’s attention for the first time,” Worobey said in a statement.

“Death and dying was such a part of our daily lives,” said Sean Strub, a writer and activist who has been HIV positive since the 1980s.

Related: Drugs Make HIV Hard to Transmit

Once the virus was identified, researchers could start working on treatments and diagnostic tests. And specialists like Worobey could look at the genetics of the virus itself to find its origins.

But no one had samples of HIV infected blood from before the 1980s.

Worobey’s team got nine samples and sequenced the RNA from eight of them. The viral RNA was scarce and badly damaged, so they used new techniques similar to those used to reconstruct DNA from ancient Neanderthal remains.

The different mutations in the genetic material — DNA from people, RNA from the virus — can provide a genetic clock. The more diversity, the more time a virus has been circulating and acquiring small mutations.

“That information is stamped into the RNA of the virus from 1970,” Worobey said.

“Our analysis shows that the outbreaks in California that first caused people to ring the alarm bells and led to the discovery of AIDS were really just offshoots of the earlier outbreak that we see in New York City.”

Related: AIDS Activists Still Fight Stigma

Other experts not involved in the research called it impressive.

“For the first time, they found and characterized samples that had a date on the tube that preceded the discovery of the disease,” said Dr. Beatrice Hahn, who studies the genetics and evolution of HIV and other infectious agents at the University of Pennsylvania.

“They had tubes with 1978, 1979 written on them,” she added. “That was something very powerful.”

If someone directly infects someone else with a virus, usually it’s easy to tell from the genetic sequence. Samples taken from each person will be very close genetically, if not identical. Because the HIV viruses taken from the different samples showed many different mutations, that shows HIV had been infecting many people for quite a time.

Related: Black Gay, Bisexual Men Have 50 Percent Risk of HIV

“This means this virus was in this country way before the disease was recognized,” Hahn said.

“By the time people discovered the disease as a completely new entity that came out of the blue and killed people left and right for really unknown reasons, the disease had already been in this country for 10 years prior. That’s a little sobering.”

Using their genetic clock, Worobey’s team showed the virus infecting men in the 1970s in San Francisco had fewer mutations. “Compared to NYC, the San Francisco epidemic in 1978 appeared to have been established more recently,” they wrote.

“This suggests that the bulk of the HIV-1 infections in San Francisco in 1978 traced back to a single introduction from New York City in around 1976.”

The Worobey team also sequenced samples of virus taken from Gaetan Dugas, a Canadian flight attendant named as “Patient Zero.” Dugas died in 1984 and stunned researchers when he told them he’d had about 250 sexual partners a year between 1979 and 1981, although it later became clear that was not uncommon.

Related: Southern, Gay Men Have High HIV Rate

The sequences make it clear he was a victim of an epidemic that had already been raging, and not its originator, Worobey said.

“It’s shocking how this man’s name has been sullied and destroyed by this incorrect history,” said Peter Staley, a former Wall Street bond trader who became an AIDS activist in New York in the 1980s.

“He was not Patient Zero and this study confirms it through genetic analysis,” Staley told NBC News.

“No one should be blamed for the spread of viruses,” Worobey said.

He said he hopes the techniques his team used can help explain the patterns of other epidemics, such as Ebola, SARS and Zika.

“I think in many ways this paper may be one of the last pieces, if not the last piece in the puzzle,” Hahn said. “I am a believer of putting the last nail into the coffin. And I believe that is what was done here.” 

Fats and figures


Meet the doctor who is changing the way we eat

  • “The focus has been on cholesterol, weight and burning calories—it’s all fatally flawed. The root cause driving heart disease and diabetes is insulin resistance. What drives insulin resistance is a diet that is high in sugar and refined carbohydrates” – Dr Aseem Malhotra

Dr Aseem Malhotra orders a double helping of cheese. At Li Veli, an Italian bistro in Covent Garden, he picks a plate of Italian cheeses as a starter and then tucks into aubergine Parmigiana, a gratin with mozzarella and Parmesan.

This isn’t a “sod the diet” day, though. Malhotra is a cardiologist, and this is how he thinks we should all eat. He puts grass-fed butter on his vegetables, and extra-virgin olive oil on everything else. And in his new documentary, The Big Fat Fix, he sets out why fat is not the enemy but sugar is, and how refined carbohydrates—white bread and white pasta—are false friends, to be consumed only in moderation.

“Some people have an outdated fear of fat,” the 38-year-old says. “It’s nonsense. We’ve got better data than we had years ago when it was said that fat was the problem. Full-fat, non-processed dairy is good for the heart and fat keeps you fuller for longer.”

What about the old bogeyman of the food industry, saturated fat? “There are different types. Extra virgin olive oil—amazing for health—has about 14 to 20 per cent saturated fat. We should move towards food-based, not food-group, guidelines.”

Malhotra dismisses the current health consensus: “The focus has been on cholesterol, weight and burning calories—it’s all fatally flawed. The root cause driving heart disease and diabetes is insulin resistance. Insulin is a hormone that helps control glucose levels in the blood. What drives insulin resistance is a diet that is high in sugar and refined carbohydrates.”

In The Big Fat Fix, Malhotra travels to Pioppi, southern Italy, where residents enjoy longevity and a healthy old-age with low rates of heart disease, diabetes and dementia. Britain’s bastardisation of the Italian diet means we think pizza and pasta. It doesn’t. It means oily fish and lots of vegetables. Pizza is a once-a-month treat; pasta is a starter. And in Pioppi, even a rare pudding is cooked in olive oil.

Dr Aseem Malhotra | Alamy

The Pioppi Protocol should be our dietary model, Malhotra says. His first advice to patients (he works at the Lister Hospital in Stevenage) is that they should eat a handful of nuts a day. Then they should cut all processed and refined sugar. They should never buy anything marked low-fat but should eat lots of vegetables and oily fish, which is high in Omega 3. Counting calories is out, too. “It’s the quality of the calories you are eating that matters, not the number,” he asserts.

The effects, he says, are dramatic: “I don’t mean weight loss—although you may lose weight as a side-effect—I mean with health. We should focus on health; the weight will correct itself.”

Even the slim should heed this advice. Many have the “illusion of protection”, Malhotra says, because they aren’t struggling to button up their jeans. “Many of my patients’ measure of success is their weight and doctors focus too much on it, too. There’s no such thing as a healthy weight. Forty per cent of people with a normal BMI will still get lifestyle diseases. The biggest risk factor for them is waist circumference.”

Malhotra was not always this way. He used to eat sugary cereal for breakfast. Finding himself starving at 11am, he would reach for a KitKat. For lunch he might have pasta, while dinner could be a curry with lots of rice. Since changing his diet, he has lost a stone—and it is all from around the waist. What changed his view was seeing, as a junior doctor, the pressure on England’s National Health Service. “There was ever more misery, ever more people who were overweight or with type II diabetes.” He believed our dietary advice must be wrong, started investigating and noted conflicts of interest in the promotion of a low-fat diet. “I have been eating and sleeping this for five years.”


The Big Fat Fix also looks at exercise (there is even a training montage). Previously, Malhotra was a drive-to-the-gym-and-pound-the-treadmill type. But then he spoke to orthopaedic surgeons, who said they were seeing people in their forties needing hip and knee replacements, and that no one should run on the pavement, or even treadmills.

Now he does a lot of squats, and focuses on compound movement. “Try not to sit for more than 45 minutes at your desk. Just stretch for 15 seconds or do a squat. For heart health, keep moving. Do what you enjoy: whether it’s cycling or walking.” In Pioppi there are no gyms. “If you look at the Mediterranean culture in the film, they walk everywhere. These people are living until 90!” He also reckons we should have more sex—to reduce the odds of heart disease, obviously.

He is frustrated that so much focus is on working off calories. “It angers me that people are measuring how many they burn on a treadmill. The body doesn’t work that way. The amount you burn from exercise is minimal compared to what people eat. If you want to put on weight, we tell you to exercise because it increases appetite.”

He calls for a revolution in how we think about energy from food. “About 75 per cent of the calories we burn are just for keeping your organs going. I need energy for my brain, my heart. I don’t want to be fuelling that with stuff that’s going to give me heart disease and type II diabetes. The idea that it doesn’t matter where the calories come from, you can just burn it off, is nonsense.”

Malhotra’s mission now is to take these messages to the masses. “A doctor’s duty goes beyond individual patients, to the whole population.”


What the doctor prescribes 
* Fat is not the enemy: cheese and butter are off the banned list, and olive oil and nuts are your dietary new best friends 
* Stop counting calories. What really matters is the quality of the calories you eat 
* Cut out processed and refined sugar. You can still eat fruit, but try to get the majority of your five-a-day from vegetables 
* Reduce your consumption of refined carbohydrates by eating as the Italians actually do—pasta is not a main course and pizza is a treat 
* If you sit at a desk all day, set a timer and every 45 minutes do 15 seconds of stretching, a squat or walk to the water cooler 
* To combat stress, get 10 minutes a day (you can break it up) of deep breathing or tai chi exercises, where you switch off


How to Crash Systemd in One Tweet

The following command, when run as any user, will crash systemd:

NOTIFY_SOCKET=/run/systemd/notify systemd-notify ""

After running this command, PID 1 is hung in the pause system call. You can no longer start and stop daemons. inetd-style services no longer accept connections. You cannot cleanly reboot the system. The system feels generally unstable (e.g. ssh and su hang for 30 seconds since systemd is now integrated with the login system). All of this can be caused by a command that’s short enough to fit in a Tweet.

Edit (2016-09-28 21:34): Some people can only reproduce if they wrap the command in a while true loop. Yay non-determinism!

The bug is remarkably banal. The above systemd-notify command sends a zero-length message to the world-accessible UNIX domain socket located at /run/systemd/notify. PID 1 receives the message and fails an assertion that the message length is greater than zero. Despite the banality, the bug is serious, as it allows any local user to trivially perform a denial-of-service attack against a critical system component.

The immediate question raised by this bug is what kind of quality assurance process would allow such a simple bug to exist for over two years (it was introduced in systemd 209). Isn’t the empty string an obvious test case? One would hope that PID 1, the most important userspace process, would have better quality assurance than this. Unfortunately, it seems that crashes of PID 1 are not unusual: a quick glance through the systemd commit log reveals a number of commit messages fixing crashes in PID 1.

Systemd’s problems run far deeper than this one bug. Systemd is defective by design. Writing bug-free software is extremely difficult. Even good programmers would inevitably introduce bugs into a project of the scale and complexity of systemd. However, good programmers recognize the difficulty of writing bug-free software and understand the importance of designing software in a way that minimizes the likelihood of bugs or at least reduces their impact. The systemd developers understand none of this, opting to cram an enormous amount of unnecessary complexity into PID 1, which runs as root and is written in a memory-unsafe language.

Some degree of complexity is to be expected, as systemd provides a number of useful and compelling features (although they did not invent them; they were just the first to aggressively market them). Whether or not systemd has made the right trade-off between features and complexity is a matter of debate. What is not debatable is that systemd’s complexity does not belong in PID 1. As Rich Felker explained, the only job of PID 1 is to execute the real init system and reap zombies. Furthermore, the real init system, even when running as a non-PID 1 process, should be structured in a modular way such that a failure in one of the riskier components does not bring down the more critical components. For instance, a failure in the daemon management code should not prevent the system from being cleanly rebooted.

In particular, any code that accepts messages from untrustworthy sources like systemd-notify should run in a dedicated process as an unprivileged user. The unprivileged process parses and validates messages before passing them along to the privileged process. This is called privilege separation and has been a best practice in security-aware software for over a decade. Systemd, by contrast, does text parsing on messages from untrusted sources, in C, running as root in PID 1. If you think systemd doesn’t need privilege separation because it only parses messages from local users, keep in mind that in the Internet era, local attacks tend to acquire remote vectors. Consider Shellshock, or the presentation at this year’s systemd conference which is titled “Talking to systemd from a Web Browser.”

Systemd’s “we don’t make mistakes” attitude towards security can be seen in other places, such as this code from the main() function of PID 1:

/* Disable the umask logic */
if (getpid() == 1)
        umask(0);

Setting a umask of 0 means that, by default, any file created by systemd will be world-readable and -writable. Systemd defines a macro called RUN_WITH_UMASK which is used to temporarily set a more restrictive umask when systemd needs to create a file with different permissions. This is backwards. The default umask should be restrictive, so forgetting to change the umask when creating a file would result in a file that obviously doesn’t work. This is called fail-safe design. Instead systemd is fail-open, so forgetting to change the umask (which has already happened twice) creates a file that works but is a potential security vulnerability.

The Linux ecosystem has fallen behind other operating systems in writing secure and robust software. While Microsoft was hardening Windows and Apple was developing iOS, open source software became complacent. However, I see improvement on the horizon. Heartbleed and Shellshock were wake-up calls that have led to increased scrutiny of open source software. Go and Rust are compelling, safe languages for writing the type of systems software that has traditionally been written in C. Systemd is dangerous not only because it is introducing hundreds of thousands of lines of complex C code without any regard to longstanding security practices like privilege separation or fail-safe design, but because it is setting itself up to be irreplaceable. Systemd is far more than an init system: it is becoming a secondary operating system kernel, providing a log server, a device manager, a container manager, a login manager, a DHCP client, a DNS resolver, and an NTP client. These services are largely interdependent and provide non-standard interfaces for other applications to use. This makes any one component of systemd hard to replace, which will prevent more secure alternatives from gaining adoption in the future.

Consider systemd’s DNS resolver. DNS is a complicated, security-sensitive protocol. In August 2014, Lennart Poettering declared that “systemd-resolved is now a pretty complete caching DNS and LLMNR stub resolver.” In reality, systemd-resolved failed to implement any of the documented best practices to protect against DNS cache poisoning. It was vulnerable to Dan Kaminsky’s cache poisoning attack which was fixed in every other DNS server during a massive coordinated response in 2008 (and which had been fixed in djbdns in 1999). Although systemd doesn’t force you to use systemd-resolved, it exposes a non-standard interface over DBUS which they encourage applications to use instead of the standard DNS protocol over port 53. If applications follow this recommendation, it will become impossible to replace systemd-resolved with a more secure DNS resolver, unless that DNS resolver opts to emulate systemd’s non-standard DBUS API.

It is not too late to stop this. Although almost every Linux distribution now uses systemd for their init system, init was a soft target for systemd because the systems they replaced were so bad. That’s not true for the other services which systemd is trying to replace such as network management, DNS, and NTP. Systemd offers very few compelling features over existing implementations, but does carry a large amount of risk. If you’re a system administrator, resist the replacement of existing services and hold out for replacements that are more secure. If you’re an application developer, do not use systemd’s non-standard interfaces. There will be better alternatives in the future that are more secure than what we have now. But adopting them will only be possible if systemd has not destroyed the modularity and standards-compliance that make innovation possible.

Hi, I’m Andrew. I’m the founder of SSLMate, a service which automates your SSL certificate deployment. I also develop open source projects like git-crypt and titus.

I blog here about a variety of technology topics, including security, devops, IPv6, and reliable programming. If you liked this post, check out my other posts or subscribe to my Atom feed.

My email address is andrew@agwa.name. I’m AGWA at GitHub and @__agwa on Twitter.

Clinton Email: We Must Destroy Syria For Israel

Posted on June 18, 2016 by Sean Adl-Tabatabai in News, US
Leaked Clinton email reveals that Clinton ordered war against Syria to benefit Israel

A leaked Hillary Clinton email confirms that the Obama administration, with Hillary at the helm, orchestrated a civil war in Syria to benefit Israel.

The new Wikileaks release shows the then Secretary of State ordering a war in Syria in order to overthrow the government and oust President Assad, claiming it was the “best way to help Israel”.

Newobserveronline.com reports:

The document was one of many unclassified by the US Department of State under case number F-2014-20439, Doc No. C05794498, following the uproar over Clinton’s private email server kept at her house while she served as Secretary of State from 2009 to 2013.

Although the Wikileaks transcript dates the email as December 31, 2000, this is an error on their part, as the contents of the email (in particular the reference to May 2012 talks between Iran and the west over its nuclear program in Istanbul) show that the email was in fact sent on December 31, 2012.

The email makes it clear that it has been US policy from the very beginning to violently overthrow the Syrian government—and specifically to do this because it is in Israel’s interests.

[Image: clinton-email-syria-israel]
“The best way to help Israel deal with Iran’s growing nuclear capability is to help the people of Syria overthrow the regime of Bashar Assad,” Clinton forthrightly starts off by saying.

Even though all US intelligence reports had long dismissed Iran’s “atom bomb” program as a hoax (a conclusion supported by the International Atomic Energy Agency), Clinton continues to use these lies to “justify” destroying Syria in the name of Israel.

She specifically links Iran’s mythical atom bomb program to Syria because, she says, Iran’s “atom bomb” program threatens Israel’s “monopoly” on nuclear weapons in the Middle East.

If Iran were to acquire a nuclear weapon, Clinton asserts, this would allow Syria (and other “adversaries of Israel” such as Saudi Arabia and Egypt) to “go nuclear as well,” all of which would threaten Israel’s interests.

Therefore, Clinton says, Syria has to be destroyed.

Iran’s nuclear program and Syria’s civil war may seem unconnected, but they are. What Israeli military leaders really worry about — but cannot talk about — is losing their nuclear monopoly.

An Iranian nuclear weapons capability would not only end that nuclear monopoly but could also prompt other adversaries, like Saudi Arabia and Egypt, to go nuclear as well. The result would be a precarious nuclear balance in which Israel could not respond to provocations with conventional military strikes on Syria and Lebanon, as it can today.

If Iran were to reach the threshold of a nuclear weapons state, Tehran would find it much easier to call on its allies in Syria and Hezbollah to strike Israel, knowing that its nuclear weapons would serve as a deterrent to Israel responding against Iran itself.

It is, Clinton continues, the “strategic relationship between Iran and the regime of Bashar Assad in Syria” that makes it possible for Iran to undermine Israel’s security.

This would not come about through a “direct attack,” Clinton admits, because “in the thirty years of hostility between Iran and Israel” this has never occurred, but through its alleged “proxies.”

The end of the Assad regime would end this dangerous alliance. Israel’s leadership understands well why defeating Assad is now in its interests.

Bringing down Assad would not only be a massive boon to Israel’s security, it would also ease Israel’s understandable fear of losing its nuclear monopoly.

Then, Israel and the United States might be able to develop a common view of when the Iranian program is so dangerous that military action could be warranted.

Clinton goes on to assert that directly threatening Bashar Assad “and his family” with violence is the “right thing” to do:

In short, the White House can ease the tension that has developed with Israel over Iran by doing the right thing in Syria.

With his life and his family at risk, only the threat or use of force will change the Syrian dictator Bashar Assad’s mind.

The email proves, as if any more proof were needed, that the US government has been the main sponsor of the growth of terrorism in the Middle East, all in order to “protect” Israel.

It is also sobering to consider that the “refugee” crisis which currently threatens to destroy Europe was directly sparked by this US government action, insofar as there are any genuine refugees fleeing the civil war in Syria.

In addition, over 250,000 people have been killed in the Syrian conflict, which has spread to Iraq—all thanks to Clinton and the Obama administration backing the “rebels” and stoking the fires of war in Syria.

The real and disturbing possibility that a psychopath like Clinton—whose policy has inflicted death and misery upon millions of people—could become the next president of America is the most deeply shocking thought of all.

Clinton’s public assertion that, if elected president, she would “take the relationship with Israel to the next level,” would definitively mark her, and Israel, as the enemy of not just some Arab states in the Middle East, but of all peace-loving people on earth.

Sean Adl-Tabatabai
Editor-in-chief at Your News Wire